
Anchoring on Pricing Pages: Why Showing All Price Points Didn't Help (And What Would)

Anchoring theory predicted that showing lower price comparisons would lift conversion. It didn't. Here's why anchoring fails on experienced decisions — and what the research actually supports.

Atticus Li · Applied Experimentation Lead at NRG Energy (Fortune 150) · Creator of the PRISM Method
2 min read

Editorial disclosure

This article lives on the canonical GrowthLayer blog path for indexing consistency. Review rules, sourcing rules, and update rules are documented in our editorial policy and methodology.

Fortune 150 experimentation lead · 100+ experiments / year · Creator of the PRISM Method
A/B Testing · Experimentation Strategy · Statistical Methods · CRO Methodology · Experimentation at Scale

Anchoring is one of the most cited findings in behavioral economics. Tversky and Kahneman demonstrated in 1974 that people make numerical estimates by starting from an initial value and adjusting insufficiently from it. The CRO implication seems obvious: show users a higher price before showing your actual price, and the actual price will feel more reasonable.

The underlying research is legitimate. But anchoring has a boundary condition that most practitioners skip over: it works reliably on novel decisions, where the decision-maker has no prior reference point. On experienced decisions, the effect is much weaker.

Why Anchoring Failed: The Prior Expectation Problem

We tested displaying estimated monthly costs at multiple usage levels on energy plan enrollment pages. The results were consistent: the additional price points produced no meaningful lift. Energy utility customers are not novel decision-makers. People who pay electricity bills carry a deeply embedded prior expectation; their reference point is not constructed from the page, it was formed through repeated direct experience with their own bills.
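To make "no meaningful lift" concrete: a result like this is typically judged with a two-proportion z-test comparing control and variant conversion rates. The sketch below uses entirely made-up counts for illustration (not the experiment's actual data) and shows the kind of flat readout an anchoring variant produced in this setting.

```python
# Hypothetical illustration: two-proportion z-test for an A/B result.
# All counts are fabricated for the example; only the method is real.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control vs. variant showing extra price anchors (fabricated counts)
z, p = two_proportion_z(conv_a=412, n_a=10_000, conv_b=425, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a flat result: p well above 0.05
```

A variant that moves conversion from 4.12% to 4.25% on 10,000 users per arm lands nowhere near significance, which is what "flat" looks like in practice.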

When Anchoring Does Work

Software pricing is a genuine anchoring opportunity — users typically do not have a strong prior expectation. New product categories create anchoring opportunities because no prior expectation exists. Low-frequency purchases create anchoring opportunities because the user cannot immediately compare to a reliable internal reference.

What Would Have Worked Instead

The actual anchor was not on the page at all. It was in the user's memory of their last twelve monthly bills. The implication: work with the existing reference point rather than trying to replace it. Bill comparison framing, variable rate comparison, and historical price context all engage with the reference point users already hold.

When your anchoring test comes back flat, do not conclude that behavioral science failed. Conclude that you applied the right science to the wrong conditions. Then identify the conditions that actually apply.

If you're interested in how behavioral science principles hold up under real experimental conditions, How to Validate Behavioral Science Principles With A/B Testing covers which principles (including anchoring) reliably replicate in digital contexts and which show inconsistent results across programs.

About the author

Atticus Li

Applied Experimentation Lead at NRG Energy (Fortune 150) · Creator of the PRISM Method

Atticus Li leads applied experimentation at NRG Energy (Fortune 150), where he and his team run more than 100 controlled experiments per year on customer-facing surfaces. He is the creator of the PRISM Method, a framework for high-velocity experimentation programs at large enterprises. He writes regularly about the statistical and operational details of A/B testing — the parts most CRO content skips.
