
The Paradox of Choice Doesn't Apply Here: When More Options Beat Fewer Options in Conversion Optimization

"Recommended Plans" failed in every test. Users preferred seeing all options. Here's why the Paradox of Choice doesn't apply to high-consideration purchases — and what does.

Atticus Li · Applied Experimentation Lead at NRG Energy (Fortune 150) · Creator of the PRISM Method
13 min read

Editorial disclosure

This article lives on the canonical GrowthLayer blog path for indexing consistency. Review rules, sourcing rules, and update rules are documented in our editorial policy and methodology.

Fortune 150 experimentation lead · 100+ experiments / year · Creator of the PRISM Method
A/B Testing · Experimentation Strategy · Statistical Methods · CRO Methodology · Experimentation at Scale

There is a piece of received wisdom in the conversion rate optimization world that has been repeated so many times it has become doctrine: reduce choices to increase conversions. Simplify your pricing table. Highlight a recommended option. Remove the decision burden and watch your numbers climb.

I tested this premise rigorously, across multiple funnel configurations, on a high-consideration purchase flow. The result was the same every time. The "Recommended Plans" concept — reducing choices by surfacing a curated preferred option — failed. The full product chart, with every available option visible, consistently outperformed the curated subset. Session replay data made the behavior concrete: users were scrolling the full list, evaluating options side by side, and selecting based on a comparison they could only make if all options were present.

This directly contradicts one of the most famous findings in consumer psychology. Barry Schwartz's paradox of choice, popularized in his 2004 book and grounded in earlier work by Sheena Iyengar and Mark Lepper, holds that more options produce more anxiety, more regret, and fewer decisions. The jam study — where shoppers exposed to a larger jam display were less likely to purchase than those shown a smaller selection — is one of the most cited empirical results in behavioral economics.

So why did more options win in my tests? And more importantly, when does the paradox of choice apply, and when does it not?

The Test Results That Challenged the Conventional Wisdom

The context was a plan selection page for a high-consideration service purchase. Users arriving at this page had already progressed through an awareness and consideration phase — they understood what they were buying and had some intent to purchase. The question was which plan best fit their situation.

The curated design showed a small set of plans with one highlighted as "recommended." The hypothesis behind it was standard CRO thinking: reduce cognitive load, guide users toward the most popular option, simplify the decision.

We ran this against a control showing the full product comparison — all plans, all pricing tiers, all contract lengths, with the full set of attributes visible for each option.

The full comparison won. Not marginally — clearly and consistently.

Session replay data provided the explanation. Users on the curated variant were not less stressed by the decision; they were abandoning it. When they could not find a plan that matched their specific usage profile, they left. Users on the full comparison variant were spending more time on the page, scrolling through options, and selecting with apparent confidence. The additional time was not hesitation — it was evaluation. They were doing the work they needed to do to make a decision.

One qualitative observation from session replay stands out: users would frequently scroll to a plan that looked appropriate, then scroll back to another to compare a specific attribute — price per usage tier, contract term, or a feature differentiator. This is not the behavior of someone experiencing choice overload. This is the behavior of a motivated decision-maker completing a comparison task.

Key Takeaway: When users are making attribute-based comparisons, removing options does not reduce cognitive load — it removes the information they need to decide. Curation that eliminates relevant options creates abandonment, not simplicity.

Resolving the Contradiction: System 1 vs System 2 Purchases

The apparent contradiction between these results and the paradox of choice literature dissolves when you look at the type of decision being studied.

Daniel Kahneman's dual-process theory describes two modes of cognition. System 1 thinking is fast, automatic, and heuristic-driven. System 2 thinking is slow, deliberate, and analytical. Most consumer behavior research — including the jam study — studies System 1 contexts: low-involvement products, impulse purchases, or casual choices where consumers have no strong prior preference and are not doing analytical comparison work.

Jam is a System 1 purchase. You pick a flavor you like or one that looks appealing. You are not comparing price per ounce at multiple usage volumes against contract length. You are not calculating an expected annual cost. There is no attribute matrix that benefits from having more data points. The decision is "which one feels right?" and more options make that heuristic process harder, not easier.

Energy plans are a System 2 purchase. Users are comparing price per unit at multiple usage tiers, early termination fees, contract durations, and product features. They have a specific household usage profile in mind. They are doing arithmetic. The decision is "which one is objectively best for my situation?" and for that kind of decision, more information is genuinely useful. Curation that removes options makes the task harder, not easier, because it may have eliminated the option that was actually the best fit.
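That arithmetic can be made concrete with a short sketch. The plan names, rates, and fees below are hypothetical illustrations, not real offers; the point is that the "best" plan is a function of the household's own usage variable.

```python
# Sketch of the comparison a System 2 buyer runs on a full plan chart.
# All plan figures are hypothetical, invented for illustration.

def annual_cost(plan: dict, monthly_kwh: float) -> float:
    """Expected yearly cost for a flat-rate plan with a monthly base fee."""
    return 12 * (plan["base_fee"] + monthly_kwh * plan["rate_per_kwh"])

plans = [
    {"name": "Saver 12", "rate_per_kwh": 0.14, "base_fee": 9.95,  "term_months": 12},
    {"name": "Value 24", "rate_per_kwh": 0.12, "base_fee": 14.95, "term_months": 24},
    {"name": "Flex 1",   "rate_per_kwh": 0.16, "base_fee": 0.00,  "term_months": 1},
]

usage = 1000  # this household's monthly kWh — the personal variable

ranked = sorted(plans, key=lambda p: annual_cost(p, usage))
for p in ranked:
    print(f'{p["name"]}: ${annual_cost(p, usage):,.2f}/yr, {p["term_months"]}-mo term')

# At a different usage profile the ranking flips, which is exactly why
# curation that removes options can remove the user's actual best fit.
low_usage_best = min(plans, key=lambda p: annual_cost(p, 300))
print(low_usage_best["name"])  # Flex 1 wins for a low-usage household
```

At 1,000 kWh the 24-month plan is cheapest; at 300 kWh the no-fee monthly plan wins. A curated subset that dropped either plan would have hidden the best answer from one of those households.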

The critical insight here is that the paradox of choice is a finding about System 1 decisions that has been overgeneralized to all purchase decisions. It was never intended to describe high-consideration, attribute-based comparisons. In fact, the literature on constructive preference — the work of Bettman, Luce, and Payne — suggests something almost opposite: that for complex decisions, the decision strategy itself is constructed during the choice process, and richer choice environments actually help consumers form clearer preferences rather than muddier ones.

The Iyengar Nuance Most CRO Practitioners Miss

Sheena Iyengar, whose jam study with Lepper is the empirical cornerstone of the paradox of choice, has written and spoken extensively about the limits of her own findings. Her work treats choice overload as a context-dependent phenomenon, not a universal law.

In her research on 401(k) enrollment, she found that adding investment options to a retirement plan increased confusion and decreased enrollment. This was not because more choices are always bad, but because the decision context (long-term financial planning with unfamiliar instruments) combined high stakes, low expertise, and options the participants could not meaningfully differentiate: exactly the conditions that produce paralysis.

High-consideration service purchases can share the "high stakes" attribute, but they often differ on the other two. Users who have progressed to a plan selection page have typically developed at least a working model of the product category. They have a reason for being there. And the options, crucially, are not undifferentiated — they differ on attributes that the user can evaluate directly against their own known situation.

When options are differentiated and users are capable of evaluating those differences, more choice is not burdensome. It is informative.

Key Takeaway: The paradox of choice is a finding about undifferentiated options in low-involvement contexts. For high-consideration purchases with clearly differentiated attributes, more options enable comparison rather than creating paralysis.

Why the "Recommended" Label Backfired

Understanding why the curated approach failed requires understanding what it was actually doing to the user experience.

When you present a "recommended" plan, you are making a claim on behalf of the user: "We think this is the right choice for you." That claim is credible only if the user believes the recommendation is based on information about their specific situation. Generic recommendations — not personalized to the user's usage profile or stated preferences — are perceived as arbitrary or sales-driven. Users who see a recommendation they cannot evaluate against their own needs tend to do one of two things: ignore it and look for the full option set (which the curated design had removed), or distrust it and abandon the page.

The users in our session replay data were doing exactly the first thing. They were looking for information the curated design had hidden from them. When they could not find it, they left.

There is also a trust dimension here. In categories where users have experience with aggressive sales tactics — energy, insurance, financial services — a vendor-selected "recommended" option triggers skepticism rather than gratitude. Users reasonably wonder whether the recommendation reflects their best interest or the seller's margin. This suspicion is not irrational; it is learned behavior from category experience.

Contrast this with a context where personalized recommendations do work: a streaming service that recommends content based on your viewing history, or an e-commerce platform that surfaces products based on your purchase behavior. In these cases, the recommendation is based on demonstrable knowledge of the individual user, and the stakes of a wrong recommendation are low. The user can easily override it. Trust is earned by accuracy, not claimed by label.

In our plan selection context, the recommendation was neither personalized nor low-stakes. It was a generic curation imposed on a high-consideration decision. Users saw through it.

When Curation Does Work: The Conditions That Matter

I want to be careful not to overcorrect from one piece of advice to its opposite. The question is not "always show all options" or "always curate." The question is: what are the conditions under which each approach serves the user?

Curation works when:

  • The decision is low-consideration and heuristic-driven. Casual purchases where the primary driver is preference or aesthetics, not analytical comparison, benefit from reduced choice. Users are not doing attribute comparisons; they are forming gestalt impressions.
  • Recommendations are genuinely personalized. If you know a user's preferences, usage history, or stated needs, a recommendation based on that data earns trust. Generic recommendations do not.
  • Options are genuinely undifferentiated. If the options differ only in superficial ways and all of them would serve the user's needs equally well, curation reduces noise without removing useful signal.
  • The user has low domain expertise. Newcomers to a category who do not know how to evaluate attributes may benefit from a recommended starting point, provided it is accompanied by enough explanation to build understanding over time.

Showing all options works when:

  • The decision is high-consideration and analytical. When users are doing attribute-based comparisons against a specific personal situation, full option visibility enables the decision rather than complicating it.
  • Options are meaningfully differentiated. Different price points, different contract terms, different feature sets — these differences matter to users and cannot be adequately captured by pointing at one option.
  • The category has established distrust of vendor recommendations. In contexts where users have reason to believe recommendations favor the seller, full transparency is a trust-building signal.
  • Users are doing their own optimization. Users who arrive with a specific need — a usage target, a budget ceiling, a contract preference — are running their own optimization algorithm. Give them the data to run it.

The energy plan context where I ran these tests hits every item in the second list. Users are comparing price at multiple usage tiers, against their own household consumption data, with different contract durations that affect both value and flexibility. They know what they need to know. They want to do the comparison themselves. Removing options from that comparison is not simplifying their decision — it is crippling their ability to make it.

Key Takeaway: The choice between curation and full display is not a universal design principle — it is a context decision. Match your approach to the decision type, not to the received wisdom of your industry.

Implications for CRO Testing Practice

This finding has reshaped how I scope tests in complex purchase funnels, and it has influenced how I structure the test pipeline in GrowthLayer for clients in high-consideration categories.

Test your "simplification" assumptions before implementing them. The instinct to simplify is not wrong, but it is not always right. If you are considering reducing the number of options on a plan selection or product comparison page, test it rather than assume it will improve conversion. Measure both short-term conversion and downstream engagement — simplified choices that produce quick conversions but poorly matched customers are a long-term problem.
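Before shipping a simplification, the conversion comparison itself is a standard two-proportion test. The counts below are hypothetical, chosen only to show the shape of the calculation; the article does not report raw numbers.

```python
# Minimal two-proportion z-test for comparing conversion between a
# full-comparison control and a curated variant. Counts are illustrative.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return the z statistic and two-sided p-value for p_a vs p_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: full chart (control) vs curated "recommended" variant
z, p = two_proportion_z(conv_a=540, n_a=9800, conv_b=455, n_b=9750)
print(f"z = {z:.2f}, p = {p:.4f}")
```

This only answers the short-term conversion question; the downstream-engagement half of the measurement (churn, plan switches, support contacts) needs its own metrics and a longer window.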

Instrument the decision process, not just the outcome. Session replay and interaction analytics on a choice page will tell you things that conversion rates alone cannot. Are users scrolling through all options? Are they bouncing directly? Are they returning multiple times before deciding? The behavioral pattern tells you whether users need more information or less.
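The "evaluation vs. abandonment" distinction can be operationalized from raw interaction events. The event schema below (session, type, plan) is a hypothetical example, not any particular analytics tool's format; the signals it derives — distinct options viewed, revisits, conversion — are the ones that separate comparison behavior from bouncing.

```python
# Sketch: deriving comparison-behavior signals from raw interaction events.
# The event schema here is a hypothetical example for illustration.
from collections import defaultdict

events = [
    {"session": "s1", "type": "view_plan", "plan": "Saver 12"},
    {"session": "s1", "type": "view_plan", "plan": "Value 24"},
    {"session": "s1", "type": "view_plan", "plan": "Saver 12"},  # scrolled back to compare
    {"session": "s1", "type": "select",    "plan": "Value 24"},
    {"session": "s2", "type": "view_plan", "plan": "Saver 12"},
    {"session": "s2", "type": "exit",      "plan": None},
]

def comparison_signals(events: list) -> dict:
    """Per session: distinct plans viewed, revisits, and whether it converted."""
    sessions = defaultdict(lambda: {"views": [], "converted": False})
    for e in events:
        s = sessions[e["session"]]
        if e["type"] == "view_plan":
            s["views"].append(e["plan"])
        elif e["type"] == "select":
            s["converted"] = True
    return {
        sid: {
            "distinct_plans": len(set(s["views"])),
            "revisits": len(s["views"]) - len(set(s["views"])),
            "converted": s["converted"],
        }
        for sid, s in sessions.items()
    }

print(comparison_signals(events))
```

A variant where converters show many revisits (like s1) is serving evaluators; a variant where non-converters view one option and exit (like s2) is starving them of information.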

Distinguish between simplification and clarity. These are not the same thing. Reducing the number of options is simplification. Making the attributes of existing options easier to understand is clarity. High-consideration buyers often need more clarity, not fewer options. Better attribute labels, cleaner comparison tables, and explicit usage-tier examples can dramatically improve decision confidence without removing the choices users need.

Be especially skeptical of curated experiences in high-distrust categories. Any category where consumers have historically experienced misaligned recommendations — energy, insurance, financial products, telecommunications — has users who have learned to distrust vendor curation. In these categories, full transparency is often a stronger conversion driver than streamlining, because it signals confidence and fairness rather than salesmanship.

I track these contextual nuances as hypothesis metadata in GrowthLayer, tagging each test with the decision type and purchase consideration level. Over time, that metadata reveals which assumptions hold in your specific context versus which received wisdom needs to be challenged.
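A minimal version of that metadata tagging can be sketched as a plain record type. The field names and values below are illustrative, not GrowthLayer's actual schema; the point is that once every test carries decision-type tags, segmented questions become simple queries over the log.

```python
# Sketch of tagging experiments with decision-type metadata so results can
# be segmented later. Field names are hypothetical, not GrowthLayer's API.
from dataclasses import dataclass, field

@dataclass
class ExperimentRecord:
    name: str
    decision_type: str           # "system1" or "system2"
    consideration: str           # "low" | "medium" | "high"
    winner: str                  # "control" | "variant" | "inconclusive"
    tags: list = field(default_factory=list)

log = [
    ExperimentRecord("recommended-plans", "system2", "high", "control",
                     tags=["curation", "plan-selection"]),
    ExperimentRecord("hero-image-swap", "system1", "low", "variant"),
]

# Query the pattern library: did curation ever win in high-consideration tests?
curation_wins = [r.name for r in log
                 if "curation" in r.tags
                 and r.consideration == "high"
                 and r.winner == "variant"]
print(curation_wins)  # empty here: curation lost the high-consideration test
```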

The Constructive Preference Model

There is one more theoretical lens worth applying here, because it changes how you think about the role of the choice environment in shaping the decision itself.

James Bettman, Mary Frances Luce, and John Payne's work on constructive preference argues that for most complex decisions, consumers do not arrive with fully formed preferences that they are simply expressing through their choices. Instead, they construct their preferences during the choice process, using the available options as scaffolding for that construction.

This is a profound reframing. It means that the choice environment is not just a display of options — it is an input to the preference-formation process itself. When you remove options from that environment, you are not just removing choices. You may be removing the comparisons that would have helped the user understand what they actually value.

In our plan selection context, a user who has never explicitly thought about the tradeoff between a short contract at a higher rate versus a long contract at a lower rate may discover that preference through the act of comparison. Show them all the options in a clear comparison format and they will often surprise themselves by choosing something different from what they would have said if asked abstractly. The full option environment enabled that self-discovery. The curated environment prevented it.

This is one more reason why "simplifying" a high-consideration choice environment can backfire. You are not just limiting the decision; you are limiting the decision-making process that produces genuinely committed, well-matched customers.

Conclusion

The paradox of choice is real, empirically grounded, and applicable to a specific class of decisions: low-involvement, low-expertise, aesthetically driven choices where options are largely undifferentiated. For that class of decisions, curation and recommendation genuinely help users, and the evidence supports reducing the choice set.

But it was never meant to be a universal law of design. And for high-consideration, attribute-based decisions — where users are motivated, are doing genuine comparison work, and are optimizing against a specific personal situation — more options, clearly presented, consistently outperform curated subsets.

The "Recommended Plans" concept failed not because recommendation is a bad idea in principle, but because it was the wrong tool for the decision type. It removed information users needed and replaced it with a claim of expertise that the recommendation did not earn. Full option display won because it trusted users to do the work they were already doing.

The lesson is not to abandon simplicity as a design value. It is to test your simplification assumptions, understand your decision type, and never apply a behavioral economics finding outside the context that generated it.

If you are managing a test program in a high-consideration category, [GrowthLayer](https://growthlayer.app) can help you track decision-type context as hypothesis metadata — so you build a pattern library that reflects your actual user behavior, not borrowed assumptions.

Key Takeaways

  • The paradox of choice applies to System 1 (heuristic, low-involvement) decisions. It does not automatically apply to System 2 (analytical, high-consideration) decisions.
  • For attribute-based purchases, users need all options to construct their preferences. Curation that removes relevant options creates abandonment, not simplicity.
  • Generic "recommended" labels trigger distrust in high-stakes, high-distrust categories. Personalized recommendations based on user data are a different matter.
  • Simplicity and clarity are not the same thing. High-consideration buyers often need more clarity about existing options, not fewer options overall.
  • Test your simplification hypotheses before implementing them. The instinct to reduce is not always right.

Frequently Asked Questions

Doesn't too much choice always create decision fatigue?

Decision fatigue is real but context-dependent. It occurs when the decision-making process is prolonged without clear criteria for evaluation. For high-consideration purchases, clear attribute presentation — not option reduction — is the antidote. Users who know what they are evaluating and why do not experience the same fatigue as users who are trying to choose between undifferentiated alternatives.

How do you know when a purchase is "high-consideration" enough to warrant showing all options?

A useful heuristic is whether users are doing attribute-based comparisons against their own known situation. If users need to know their usage level, budget, or specific requirements to make the decision, it is high-consideration. If the decision is primarily aesthetic or preference-based without a specific personal variable, it is closer to the jam study context.

Could the "Recommended Plans" result have been a trust issue specific to the category?

Partially, yes. Categories with a history of misaligned recommendations create learned skepticism that makes any vendor-curated suggestion less credible. But even controlling for trust, the functional problem remains: for attribute-based decisions, removing options removes information. Both factors contribute to the result, and both are relevant to how you design choice environments.

What if my users are not expert enough to evaluate all the attributes?

Then the solution is education, not elimination. Explain what the attributes mean, provide usage examples, and help users understand how to apply the attributes to their own situation. This builds decision competence rather than bypassing it. Users who understand a complex decision space are more committed to their eventual choice than users who were guided to a selection they do not fully understand.

About the author

Atticus Li

Applied Experimentation Lead at NRG Energy (Fortune 150) · Creator of the PRISM Method

Atticus Li leads applied experimentation at NRG Energy (Fortune 150), where he and his team run more than 100 controlled experiments per year on customer-facing surfaces. He is the creator of the PRISM Method, a framework for high-velocity experimentation programs at large enterprises. He writes regularly about the statistical and operational details of A/B testing — the parts most CRO content skips.
