
Lean Startup Methodology for SaaS: Build-Measure-Learn Done Right

_By Atticus Li -- Applied Experimentation Lead at NRG Energy (Fortune 150). Creator of the PRISM Method. Learn more at atticusli.com._




---

Eric Ries's _The Lean Startup_ came out in 2011. Fifteen years later, a remarkable number of SaaS teams still use its vocabulary -- MVP, pivot, build-measure-learn, validated learning -- while practicing something that looks nothing like what Ries actually proposed.

The vocabulary stuck. The discipline did not.

I have worked with teams that ran "lean" for years and never produced a single piece of validated learning, because they were measuring the wrong things and pivoting based on feelings. I have also seen teams that never used the word "lean" but practiced its actual methodology -- tight build-measure-learn loops, rigorous experimentation, honest evaluation of evidence -- and compounded their product advantage every quarter.

The difference is not the vocabulary. It is the measurement.

Reading the primary sources again -- Ries's _The Lean Startup_, Steve Blank's _The Four Steps to the Epiphany_ and _The Startup Owner's Manual_, the Y Combinator material, Alistair Croll and Benjamin Yoskovitz's _Lean Analytics_ -- one conclusion keeps emerging:

The Build-Measure-Learn loop only produces learning when the "Measure" step is designed before the "Build" step, with clear predictions and a pre-committed decision rule. Most teams do Build, then Measure whatever is easy, then Learn whatever they wanted to believe. That is not lean. That is confirmation bias with a methodology sticker on it.

This post is about doing Build-Measure-Learn the way it was meant to be done, adapted for SaaS.

What Build-Measure-Learn Actually Means

The textbook description: take an idea, build an MVP, measure how users respond, learn what to do next. The loop iterates. Speed of iteration is the competitive advantage.

The textbook description is incomplete. The actual discipline has four parts, and skipping any of them breaks the loop:

  1. State a falsifiable hypothesis. Not "users want this feature." Something like: "At least 30% of users who encounter this feature in context will complete the flow within 7 days."
  2. Pre-commit to a decision rule. Before building anything: if we observe X, we persevere; if we observe Y, we pivot; if we observe Z, we kill. Write it down.
  3. Build the cheapest thing that can test the hypothesis. Not the cheapest version of the feature -- the cheapest _test_ of the hypothesis. Often these are different.
  4. Measure against the pre-registered metric and apply the pre-committed decision rule. Without hedging. Without reinterpreting after seeing the result.

If you are not doing all four, you are not running Build-Measure-Learn. You are building, hoping, and rationalizing.
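
A minimal sketch of what pre-registration can look like in practice, using Python for concreteness. The hypothesis text, the 30% threshold, and the persevere/pivot/kill cut-offs are all hypothetical placeholders; the point is that the decision function is written down before anything is built and is applied mechanically afterward.

```python
from datetime import date

# Pre-registered before anything is built. The hypothesis, metric, and
# thresholds below are illustrative placeholders, not recommended values.
HYPOTHESIS = (
    "At least 30% of users who encounter this feature in context "
    "will complete the flow within 7 days."
)
METRIC = "flow_completion_rate_7d"
REGISTERED_ON = date(2025, 1, 6)  # hypothetical pre-registration date

def decide(observed_rate: float) -> str:
    """Apply the pre-committed decision rule to the observed metric.

    The rule is fixed at registration time; the observed value is the
    only input allowed after the measurement window closes.
    """
    if observed_rate >= 0.30:
        return "persevere"   # hypothesis supported
    if observed_rate >= 0.15:
        return "pivot"       # partial signal: revisit segment or framing
    return "kill"            # no meaningful signal

# After the measurement window closes, the rule is applied mechanically:
print(decide(0.22))  # -> "pivot"
```

Checking a file like this into version control before the build starts is one low-ceremony way to make the pre-commitment auditable.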

The MVP Problem (What Most Teams Get Wrong)

The most common lean-startup misunderstanding: treating "MVP" as a synonym for "version one" of the product.

The MVP is not the first real version of the product. The MVP is the smallest thing that tests the riskiest assumption. Sometimes that is a feature. Sometimes that is a landing page. Sometimes that is a manually-delivered service behind a product-shaped UI. Sometimes that is a prototype that cannot be used at scale.

The question is: what is the single biggest assumption you are making about your users or your market, and what is the cheapest way to get evidence on whether that assumption is true?

Common MVP Mistakes

  • Building a scaled-down version of the full product. If you build 20% of the feature set, you have tested whether users like 20% of the feature set, not whether the core value proposition works. Your "MVP" is a smaller product, not an experiment.
  • Choosing the MVP based on engineering ease rather than learning value. "Let's ship what we can in two weeks" is not a hypothesis. "Let's test whether users will pay for X" is.
  • Polishing the MVP to feel production-grade. If the point is to learn fast, polish is overhead.
  • Measuring the wrong response. Did users click? Did they use the feature once? None of that tells you whether the underlying assumption holds. You usually want a measure of sustained behavior or willingness to pay.

What a Good SaaS MVP Looks Like

A good MVP does three things:

  1. It puts the core value proposition in front of real users as quickly as possible.
  2. It is instrumented to measure the specific behavior the hypothesis predicts.
  3. It is disposable. You can throw it away and rebuild properly if the learning supports it, without the sunk-cost trap.

Wizard-of-Oz MVPs (manually delivering what looks like an automated service), concierge MVPs (delivering the service in person to a small number of users), and smoke tests (landing pages that test willingness to sign up or pay before the product exists) are all legitimate. Ship whatever tests the assumption fastest.
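
For the smoke-test variety, the instrumentation can be as small as one endpoint that records the exact behavior the hypothesis predicts. A minimal sketch, assuming a Flask app and a local SQLite file; the route, table, and field names are illustrative, not a prescribed setup.

```python
import sqlite3
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)
DB_PATH = "smoke_test.db"  # hypothetical local store for the experiment

def init_db() -> None:
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS signups "
            "(email TEXT, variant TEXT, created_at TEXT)"
        )

@app.post("/signup-intent")
def signup_intent():
    """Record willingness to sign up before the product exists."""
    payload = request.get_json(force=True)
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "INSERT INTO signups VALUES (?, ?, ?)",
            (
                payload.get("email"),
                payload.get("variant", "default"),
                datetime.now(timezone.utc).isoformat(),
            ),
        )
    return jsonify({"status": "recorded"}), 201

if __name__ == "__main__":
    init_db()
    app.run(debug=True)
```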

Vanity Metrics Are Still Killing SaaS Teams

More than a decade after _Lean Analytics_, SaaS teams still report and celebrate metrics that tell them nothing about whether the business is working.

Vanity metrics are metrics that go up regardless of whether the product is doing its job:

  • Total signups
  • Total users
  • Pageviews
  • "Engagement" defined as any session
  • Feature adoption without retention context

Actionable metrics are metrics tied to behavior that predicts business outcomes:

  • Activation rate (signup to first successful action in a defined window)
  • Cohort retention curves (what percentage of a cohort is still active in month N)
  • Time to value
  • Net revenue retention
  • LTV:CAC with payback period
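
That last item hides a few definitional choices, so here is a small worked sketch of LTV:CAC and payback period. The inputs are invented for illustration, not benchmarks; the useful property is that payback period falls out of the same numbers rather than living on a separate dashboard.

```python
# Illustrative inputs -- not benchmarks.
arpa_monthly = 120.0   # average revenue per account, per month
gross_margin = 0.80    # fraction of revenue kept after cost of service
monthly_churn = 0.025  # fraction of accounts lost per month
cac = 1_800.0          # fully loaded cost to acquire one account

# Simple LTV model: contribution margin per month / monthly churn.
monthly_contribution = arpa_monthly * gross_margin
ltv = monthly_contribution / monthly_churn

ltv_to_cac = ltv / cac
payback_months = cac / monthly_contribution

print(f"LTV           : ${ltv:,.0f}")
print(f"LTV:CAC       : {ltv_to_cac:.1f}")
print(f"Payback period: {payback_months:.1f} months")
```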

If the dashboard you present to the team every week is dominated by vanity metrics, you are not running lean. You are running a comfort operation.

The actionable alternative is cohort-based metrics. Cohorts let you see whether the product is actually getting better, whether a recent change helped, and whether growth is real or just new signups papering over churn.
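
A minimal sketch of the cohort view, assuming nothing more than an activity log with a user id, an activity date, and a signup date. Column names and the definition of "active" are placeholders for whatever your product actually instruments.

```python
import pandas as pd

# Hypothetical events table: one row per day on which a user was active,
# plus that user's signup date. Column names are placeholders.
events = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 3, 3, 3],
    "active_on": pd.to_datetime([
        "2025-01-03", "2025-02-10", "2025-01-15",
        "2025-01-20", "2025-02-02", "2025-03-05", "2025-04-01",
    ]),
    "signed_up": pd.to_datetime([
        "2025-01-01", "2025-01-01", "2025-01-10",
        "2025-01-10", "2025-02-01", "2025-02-01", "2025-02-01",
    ]),
})

# Cohort = signup month; month_n = whole calendar months since signup.
events["cohort"] = events["signed_up"].dt.to_period("M")
events["month_n"] = (
    (events["active_on"].dt.year - events["signed_up"].dt.year) * 12
    + (events["active_on"].dt.month - events["signed_up"].dt.month)
)

cohort_sizes = events.groupby("cohort")["user_id"].nunique()
active_users = events.groupby(["cohort", "month_n"])["user_id"].nunique()
retention = active_users.unstack(fill_value=0).divide(cohort_sizes, axis=0)

# Rows: signup cohorts; columns: months since signup; values: share still active.
print(retention)
```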

The Pivot Trap

"Pivot" is the most misused word in startup vocabulary. Ries used it narrowly: a structured change in strategy based on validated learning, where most of what the team has built carries forward.

Most teams use "pivot" to mean: "we changed our mind based on how we feel about the market this quarter."

Legitimate Pivots

Ries catalogued ten types. The ones I see working most often in SaaS:

  • Customer segment pivot. The problem is real and the solution works, but you were serving the wrong customer.
  • Customer need pivot. Same customer, different problem to solve.
  • Channel pivot. Same customer, same product, different route to reach them.
  • Business architecture pivot. Same product, change from enterprise to SMB or vice versa.
  • Engine-of-growth pivot. Switching from viral to paid, paid to sales, etc.

Illegitimate "Pivots"

  • "We pivoted because fundraising was hard for our original idea." This is abandonment, not pivoting.
  • "We pivoted because the founders got excited about a new space." This is distraction, not pivoting.
  • "We pivoted because we had three bad quarters." This is panic, not pivoting.

Real pivots are driven by validated learning that the original hypothesis does not hold, with evidence, and with an explicit account of what the team has learned and what it will carry forward. If there is no evidence and no carryover, it is a restart -- which is a valid choice, but do not dress it up as a pivot.

Validated Learning in Practice

A unit of validated learning has three parts:

  1. A specific assumption that was in doubt.
  2. Evidence gathered in a structured way that supports or refutes the assumption.
  3. A decision made on the basis of that evidence.

Most team "learnings" fail on all three. Nobody remembered exactly what was being tested. The evidence was anecdotal or selective. The decision was made before the learning arrived.

The practice that tests well in real teams: write the assumption and the decision rule down _before_ you build. Review the evidence against those pre-committed criteria _once_. Record the decision. Move on.

Experiment tracking systems exist to make this discipline easier to follow and harder to skip. I wrote about why "run more tests" is not the answer -- the answer is to produce more validated learning per test, and that requires pre-registration.
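
Tool or no tool, the unit of record can be very small. A sketch of one documented loop iteration, with invented field values; the fields mirror the three-part definition above plus the carryforward the next iteration inherits.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ValidatedLearning:
    """One Build-Measure-Learn iteration, written once and not edited after."""
    assumption: str          # the specific assumption that was in doubt
    hypothesis: str          # the falsifiable prediction, with metric and threshold
    decision_rule: str       # persevere / pivot / kill criteria, set before the build
    evidence: str            # what was observed, against the pre-registered metric
    decision: str            # the decision actually made
    carryforward: list[str] = field(default_factory=list)  # what the next loop inherits

# Invented example values, for illustration only.
record = ValidatedLearning(
    assumption="Mid-market admins will import data themselves during onboarding.",
    hypothesis=">= 40% of new mid-market workspaces complete an import within 14 days.",
    decision_rule="persevere >= 0.40; pivot 0.20-0.40; kill < 0.20",
    evidence="Observed 0.27 across 212 workspaces over 6 weeks.",
    decision="pivot",
    carryforward=["Import UI is usable; the blocker is data access, not workflow."],
)
```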

A Framework for Running Lean Startup in SaaS

  1. Identify the riskiest assumption. Not the whole plan -- the single assumption whose failure would most undermine the business.
  2. Write a falsifiable hypothesis about that assumption. With a specific predicted metric and threshold.
  3. Pre-commit to a decision rule. Persevere, pivot, or kill -- with criteria.
  4. Design the cheapest test of the hypothesis. Often not a feature. Often a landing page, a wizard-of-Oz, a concierge prototype.
  5. Build only what is needed to run the test. No more.
  6. Instrument for the specific metric that tests the hypothesis. Not for everything you could measure.
  7. Run the test long enough to get a signal. Pre-calculate the sample size where applicable (a sketch of that calculation follows this list).
  8. Apply the pre-committed decision rule. Without hedging.
  9. Document the validated learning. Hypothesis, evidence, decision, forward actions.
  10. Repeat on the next riskiest assumption.
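
For step 7, a minimal sample-size sketch for a binary metric like activation, using the standard two-proportion approximation. The 30% baseline, the 35% target, and the conventional 5% significance / 80% power settings are assumptions to replace with your own.

```python
from math import ceil

from scipy.stats import norm

def sample_size_per_arm(p_baseline: float, p_target: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per arm for a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_baseline * (1 - p_baseline) + p_target * (1 - p_target)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p_baseline - p_target) ** 2
    return ceil(n)

# Example: baseline activation 30%, smallest lift worth detecting is 5 points.
print(sample_size_per_arm(0.30, 0.35))  # -> 1374 users per arm with these assumptions
```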

Lean Startup Experiment Checklist

  • [ ] Riskiest assumption explicitly identified and written down
  • [ ] Hypothesis is falsifiable with a specific predicted metric and threshold
  • [ ] Decision rule pre-committed: persevere / pivot / kill criteria
  • [ ] MVP is the cheapest test of the hypothesis, not a small version of the product
  • [ ] Instrumentation measures the specific hypothesis-testing behavior
  • [ ] Actionable metric selected over vanity metric
  • [ ] Test duration long enough for signal, pre-calculated where applicable
  • [ ] Decision applied against pre-committed rule without reinterpretation
  • [ ] Validated learning documented: hypothesis, evidence, decision, carryforward
  • [ ] Next-iteration hypothesis already identified

Common Lean Startup Mistakes

  • Treating MVP as "version one." Test the assumption, not the future product.
  • Measuring vanity metrics. Signups are not validated learning.
  • Reinterpreting decision rules after seeing results. Pre-registration is the whole point.
  • Calling every change a pivot. Pivots have carryover; restarts do not.
  • Confusing "lean" with "cheap." Lean is disciplined hypothesis testing. Cheapness is one consequence of that discipline, not the definition.
  • Running the loop without writing anything down. Validated learning that exists only in someone's memory is not validated.

The Bottom Line

Build-Measure-Learn is still the right methodology for most early-stage SaaS work. The failure mode is not the methodology -- it is the practice.

The teams that run it well share three habits: they pre-register hypotheses and decision rules, they use actionable (cohort-based) metrics, and they treat each loop iteration as a unit of documented learning rather than a vibe shift. The teams that run it badly use the vocabulary and skip the discipline.

If your team is running Build-Measure-Learn loops and losing the thread on what you have actually validated, that is the exact problem I built GrowthLayer to solve. But tool or no tool, the principle stands: write the hypothesis down, commit to the decision rule, measure what the hypothesis actually predicts, and be honest when the evidence says you were wrong.

---

_Atticus Li leads enterprise experimentation at NRG Energy and advises SaaS companies on applying lean startup methodology to product decisions. Hypothesis pre-registration and validated-learning discipline are core components of his PRISM framework. Learn more at atticusli.com._

About the author

Atticus Li

Applied Experimentation Lead at NRG Energy (Fortune 150) · Creator of the PRISM Method

Atticus Li leads applied experimentation at NRG Energy (Fortune 150), where he and his team run more than 100 controlled experiments per year on customer-facing surfaces. He is the creator of the PRISM Method, a framework for high-velocity experimentation programs at large enterprises. He writes regularly about the statistical and operational details of A/B testing — the parts most CRO content skips.
