Anchoring on Pricing Pages: Why Showing All Price Points Didn't Help (And What Would)
Anchoring theory predicted that showing lower price comparisons would lift conversion. It didn't. Here's why anchoring fails on experienced decisions — and what the research actually supports.
Editorial disclosure
Review rules, sourcing rules, and update rules for this article are documented in our editorial policy and methodology.
Anchoring is one of the most cited findings in behavioral economics. Tversky and Kahneman demonstrated in 1974 that people make numerical estimates by starting from an initial value and adjusting insufficiently from it. The CRO implication seems obvious: show users a higher price before showing your actual price, and the actual price will feel more reasonable.
The effect itself is well replicated. But anchoring has a boundary condition that most practitioners skip over: it works reliably on novel decisions, where the decision-maker has no prior reference point. On experienced decisions, anchoring has a much weaker effect.
Why Anchoring Failed: The Prior Expectation Problem
We tested displaying estimated monthly costs at multiple usage levels on energy plan enrollment pages. The results were consistent: displaying additional price points produced no meaningful lift. Energy utility customers are not novel decision-makers. People who pay electricity bills have a deeply embedded prior expectation. Their reference point is not constructed from the page; it is formed through repeated direct experience.
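Before calling a result like this a true null, it is worth checking that the experiment was actually large enough to detect a meaningful lift. A minimal sketch of that check, using a standard two-proportion z-test with stdlib Python only (the visitor and conversion counts below are hypothetical illustrations, not our actual experiment data):

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    Returns (absolute lift, z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (erf-based)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical flat result: 5.00% control vs. 5.06% variant
lift, z, p = two_proportion_ztest(1000, 20000, 1012, 20000)
print(f"lift: {lift:+.4%}, z = {z:.2f}, p = {p:.3f}")
```

A large p-value here only says the observed lift is consistent with noise; pairing it with a pre-registered minimum detectable effect is what lets you distinguish "anchoring failed" from "the test was underpowered."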
When Anchoring Does Work
Software pricing is a genuine anchoring opportunity — users typically do not have a strong prior expectation. New product categories create anchoring opportunities because no prior expectation exists. Low-frequency purchases create anchoring opportunities because the user cannot immediately compare to a reliable internal reference.
What Would Have Worked Instead
The actual anchor was not on the page at all. It was in the user's memory of their last twelve monthly bills. The implication: work with the existing reference point rather than trying to replace it. Bill comparison framing, variable rate comparison, and historical price context all engage with the reference point users already hold.
When your anchoring test comes back flat, do not conclude that behavioral science failed. Conclude that you applied the right science to the wrong conditions. Then identify the conditions that actually apply.
Related Reading
If you're interested in how behavioral science principles hold up under real experimental conditions, How to Validate Behavioral Science Principles With A/B Testing covers which principles (including anchoring) reliably replicate in digital contexts and which show inconsistent results across programs.
Applied Experimentation Lead at NRG Energy (Fortune 150) · Creator of the PRISM Method
Atticus Li leads applied experimentation at NRG Energy (Fortune 150), where he and his team run more than 100 controlled experiments per year on customer-facing surfaces. He is the creator of the PRISM Method, a framework for high-velocity experimentation programs at large enterprises. He writes regularly about the statistical and operational details of A/B testing — the parts most CRO content skips.