
SaaS MVP Development: How to Build the Right First Version (Not a Smaller Product)

_By Atticus Li -- Applied Experimentation Lead at NRG Energy (Fortune 150). Creator of the PRISM Method. Learn more at atticusli.com._

---

"MVP" is the most misused three letters in SaaS.

In most teams I have worked with, "let's ship an MVP" is code for "let's ship a smaller version of the full product." The result is a smaller product, not a minimum viable anything. The team spends six weeks on what should have taken two. They test the wrong thing. The launch does not produce learning -- it produces a soft launch.

The MVP is not a smaller product. It is the smallest thing that tests the riskiest assumption.

The research and practice I trust on this -- Eric Ries's original definition in _The Lean Startup_, Rob Fitzpatrick's _The Mom Test_, Dan Olsen's _The Lean Product Playbook_, Steve Blank's Customer Development material, Marty Cagan on product discovery -- converges on one idea:

A SaaS MVP is successful when it produces the learning you needed, as fast and cheaply as possible. It does not need to be usable at scale, visually polished, or even built out of real code. It needs to test the specific assumption whose failure would most damage the business.

This post is about doing SaaS MVP development the way it was meant to be done.

What "Minimum Viable Product" Actually Means

Ries's original definition: the MVP is the version of the product that enables a full turn of the Build-Measure-Learn loop with the minimum effort.

Read that carefully. It is not about shipping features. It is about enabling learning. The MVP is a learning vehicle, not a launch.

What follows from the definition:

  • The MVP should not try to be "viable" in the business sense. That comes later.
  • The MVP should not be polished. Polish is optimization for a product-market fit you have not established yet.
  • The MVP should not include secondary features. Every feature that is not testing the core assumption is noise.
  • The MVP should be disposable. If the learning suggests a different direction, you throw it away without sunk-cost pain.

This is why "MVP" as "smaller version of real product" is wrong. A smaller version of a real product carries assumptions you have not tested. It also creates sunk cost that biases future decisions.

The Question That Defines a Good MVP

Before you build anything, answer one question clearly: what is the single biggest assumption you are making about the business, and what is the cheapest way to get evidence on whether that assumption is true?

Some examples:

  • Assumption: SMB marketers will pay $100/month for automated campaign reporting. Test: A landing page and a manually produced report delivered to 20 sign-ups. Measure willingness to pay.
  • Assumption: Developers will adopt a command-line tool for our specific workflow. Test: Ship the CLI with the minimum workflow, measure activation and repeat use.
  • Assumption: Users will invite teammates after experiencing value. Test: The smallest possible core workflow with the invite flow instrumented, nothing else.
  • Assumption: An AI-generated version of the workflow outperforms manual for most users. Test: Wizard-of-Oz where a human does the work behind an AI-shaped interface. Measure whether users prefer the output.

Each of these tests the specific assumption without building the full product. If the assumption fails, you saved months. If it holds, you have validated learning to invest behind.
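
To make the discipline concrete, here is a minimal sketch of what "written down" can look like, using the first example above. The record shape, field names, and the specific threshold and horizon values are my own illustration, not a standard format; the point is that assumption, test, metric, threshold, and horizon are committed in one place before any build work starts.

```typescript
// A minimal record for the riskiest assumption behind an MVP.
// The shape is illustrative; the discipline is writing it down first.
interface RiskiestAssumption {
  assumption: string;   // what must be true for the business to work
  test: string;         // the cheapest experiment that produces evidence
  metric: string;       // the behavior that validates or invalidates it
  threshold: string;    // the pre-committed bar for "validated"
  horizonWeeks: number; // how long the test runs before a decision
}

const reportingMvp: RiskiestAssumption = {
  assumption:
    "SMB marketers will pay $100/month for automated campaign reporting",
  test: "Landing page plus a manually produced report delivered to 20 sign-ups",
  metric: "Conversion from delivered report to paid subscription",
  threshold: ">= 25% of report recipients convert to paid", // placeholder bar
  horizonWeeks: 6, // placeholder horizon
};
```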

The Four MVP Patterns That Actually Work

1. Wizard-of-Oz

The product looks automated. Behind the scenes, a human is doing the work. Users do not know. You learn whether the automated version would have value before you build the automation.

Classic examples: Zappos started this way (manually buying shoes from retail stores when customers ordered). Many modern AI companies do this for their first version, then automate as demand validates.

2. Concierge

Similar to Wizard-of-Oz but without pretending. You deliver the service manually to a small number of customers. They know it is manual. You learn whether the job is valuable and what the workflow really looks like.

Food on the Table, one of the canonical Lean Startup examples, started as a concierge service. Many consulting-to-SaaS transitions start here.

3. Smoke Test / Landing Page

A landing page describing a product that does not exist. Measure willingness to sign up or pay. Buffer famously started this way. Sometimes extended with a "sold out" or "waitlist" flow when interest materializes.

This one is contested -- critics argue that sign-ups on a landing page do not reliably predict actual usage. They are right that the signal is imperfect. They are wrong that it is worthless. Triangulated with interviews and small concierge deliveries, smoke tests produce legitimate learning.

4. Single-Feature / Single-Workflow MVP

The real product, but shipped with only the single workflow that tests the core assumption. No secondary features. No polish beyond usable. Instrumented for the specific behavior that validates or invalidates the hypothesis.

This is the MVP pattern most teams think they are doing. They are usually not. The failure mode is shipping three workflows "because they all seem important." If three workflows all seem important, you do not have one riskiest assumption -- you have muddled thinking.

What to Build, What to Skip

In a SaaS MVP:

Build:

  • The single workflow that tests the hypothesis
  • Just enough authentication to track user identity
  • Just enough instrumentation to measure the specific behavior that validates the hypothesis (sketched at the end of this section)
  • The minimum UI that lets the workflow be attempted
  • The billing flow if the hypothesis is about willingness to pay

Skip:

  • Secondary workflows
  • Settings, configurations, preferences
  • Role and permission systems
  • Admin dashboards
  • Scalability work beyond what 100 users can break
  • Visual polish beyond legible
  • Everything you think "users will expect"

The discipline of what to skip is usually harder than the discipline of what to build. Engineers, designers, and PMs all have instincts that pull toward completeness. Fighting those instincts is part of MVP discipline.
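
To show what "just enough instrumentation" can mean in practice, here is a minimal sketch: one event shape tied to user identity, logging only the behaviors the hypothesis predicts. The event names and the /internal/events endpoint are hypothetical, not a reference to any particular analytics library.

```typescript
// Hypothesis-specific instrumentation: one event shape, one sink,
// tied to user identity. No general-purpose analytics layer.
// Event names and the endpoint are illustrative.
interface MvpEvent {
  userId: string; // from the minimal auth layer
  event: "signed_up" | "invite_sent" | "converted_to_paid";
  at: string;     // ISO timestamp
}

async function track(
  userId: string,
  event: MvpEvent["event"]
): Promise<void> {
  const payload: MvpEvent = {
    userId,
    event,
    at: new Date().toISOString(),
  };
  // Append to whatever durable store the MVP already has; a log
  // table or a flat file is plenty at this scale.
  await fetch("/internal/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Instrument only the behavior the hypothesis predicts, e.g.:
// await track(currentUser.id, "invite_sent");
```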

Measuring an MVP (Avoiding Vanity Metrics)

MVPs generate small numbers. Small numbers are easy to misinterpret.

What to measure:

  • Behavior that tests the hypothesis. If the hypothesis is about willingness to pay, measure payment. If about activation, measure activation. Do not default to what is easy to measure.
  • Cohort behavior rather than event counts. 20 users, 60% of whom did X, is more informative than 1000 anonymous clicks (see the cohort sketch at the end of this section).
  • Qualitative interviews alongside quantitative signal. Talk to the MVP users. Understand the why behind the behavior.
  • Time-to-behavior, not just frequency. How fast did users reach the behavior the hypothesis predicted?

What not to measure:

  • Signup count as the primary metric
  • Time on site
  • Feature clicks without retention
  • Cumulative event totals without per-user cohort context

This connects to the Lean Startup principle about vanity metrics: the MVP is the phase most vulnerable to misreading, because numbers are small and every piece of positive signal feels meaningful.
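
Here is a minimal sketch of the cohort framing, assuming events are stored as per-user records like the ones sketched in the previous section. It answers "what fraction of identified users reached the behavior, and how fast" instead of counting raw events.

```typescript
// Cohort view of MVP events: per-user conversion and time-to-behavior,
// not cumulative event totals. Event shape matches the earlier sketch.
interface MvpEvent {
  userId: string;
  event: string;
  at: string; // ISO timestamp
}

function cohortSummary(
  events: MvpEvent[],
  signupEvent: string,
  targetEvent: string
) {
  const signupAt = new Map<string, number>();
  const targetAt = new Map<string, number>();

  for (const e of events) {
    const t = Date.parse(e.at);
    const m = e.event === signupEvent ? signupAt
            : e.event === targetEvent ? targetAt
            : null;
    if (m) {
      const prev = m.get(e.userId);
      if (prev === undefined || t < prev) m.set(e.userId, t); // keep earliest
    }
  }

  const converted = [...signupAt.keys()].filter((u) => targetAt.has(u));
  const daysTo = converted
    .map((u) => (targetAt.get(u)! - signupAt.get(u)!) / 86_400_000)
    .sort((a, b) => a - b);

  return {
    users: signupAt.size,
    convertedFraction: signupAt.size ? converted.length / signupAt.size : 0,
    medianDaysToBehavior: daysTo[Math.floor(daysTo.length / 2)] ?? null,
  };
}

// cohortSummary(events, "signed_up", "invite_sent")
// => { users: 20, convertedFraction: 0.6, medianDaysToBehavior: 2.1 } (shape only)
```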

When to Stop and Commit to the Real Product

You have enough MVP learning to commit when:

  • The core hypothesis has clear signal (positive or negative) at the validity threshold you pre-committed to
  • You have interviewed enough users to understand the _why_ behind the signal
  • You have identified the next assumption worth testing
  • Your team has the pattern-recognition to predict the shape of the next version

If the signal is positive, the next version is the real product, informed by what you learned. If negative, you pivot or kill -- see pivot or persevere for the decision framework.

Common MVP Mistakes

  • Treating MVP as "version one of the product." The MVP is a test. The product is what comes after.
  • Choosing MVP scope by engineering ease. "Let's ship what we can in two weeks" is not a hypothesis. The scope is determined by what tests the assumption, not by what is convenient.
  • Polishing the MVP. If you have time to polish, you have time to test more assumptions. Redirect.
  • Measuring the wrong thing. A signup is not a validated assumption about willingness to pay. Measure what the hypothesis actually predicts.
  • Building for 1000 users when you have 20. Premature scaling. The scaling work changes when you have real product-market fit data.
  • Keeping the MVP around after you have learned. Tear it down, rebuild properly. The MVP codebase is a prototype, not a foundation.

A Framework for SaaS MVP Development

  1. Identify the riskiest assumption. The single assumption whose failure would most damage the business.
  2. Pick the MVP pattern. Wizard-of-Oz, concierge, smoke test, or single-workflow. Select for cheapest test of the assumption.
  3. Pre-commit to success criteria. What signal, what threshold, what time horizon?
  4. Build only what is needed to run the test. Ruthlessly cut scope.
  5. Instrument specifically. Measure the behavior that validates the hypothesis, not what is convenient.
  6. Run the test. Usually 4-12 weeks depending on the hypothesis.
  7. Interview alongside the data. Quantitative plus qualitative is more informative than either alone.
  8. Apply the pre-committed decision rule. Ship real product, pivot, or kill (a sketch of such a rule follows this list).
  9. Document the validated learning. What you learned, what carries forward.
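
A pre-committed decision rule (steps 3 and 8) can be as small as a function written before the test starts, so that the call is mechanical once the data is in. The thresholds below are placeholders for whatever the team committed to, not recommendations.

```typescript
// Decision rule written down before the test runs. Every number here
// is a placeholder for the team's own pre-committed thresholds.
type MvpDecision = "ship" | "pivot" | "kill";

interface MvpResult {
  users: number;             // cohort size actually observed
  convertedFraction: number; // e.g. from the cohort summary above
}

function decide(result: MvpResult): MvpDecision {
  if (result.users < 15) {
    // Below the pre-committed sample size: extend the test, do not decide.
    throw new Error("Cohort too small for the pre-committed rule");
  }
  if (result.convertedFraction >= 0.25) return "ship"; // clear positive signal
  if (result.convertedFraction >= 0.1) return "pivot"; // weak but real signal
  return "kill";
}

// Applying the rule is then a lookup, not a debate:
// decide({ users: 20, convertedFraction: 0.3 }) === "ship"
```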

MVP Checklist

  • [ ] Single riskiest assumption identified and written down
  • [ ] MVP pattern chosen (wizard-of-Oz / concierge / smoke test / single-workflow)
  • [ ] Success criteria pre-committed: signal, threshold, time horizon
  • [ ] Scope cut to the minimum that tests the assumption
  • [ ] Instrumentation specific to the hypothesis-testing behavior
  • [ ] Qualitative interview plan alongside the quantitative measurement
  • [ ] Pre-committed decision rule: ship / pivot / kill
  • [ ] Disposable mindset: willing to throw the MVP code away
  • [ ] Scaling and polish deferred until hypothesis is validated
  • [ ] Validated learning documented post-test

The Bottom Line

The MVP is not a product. It is a learning instrument. The SaaS teams that build MVPs well treat the MVP as a disposable test of the assumption whose failure would most damage the business. The teams that build MVPs poorly treat the MVP as a smaller version of the full product, shipping it six weeks later than they should have and learning less than they needed to.

If your team is running MVP-stage experiments and losing track of what you tested and what you learned, that is the exact problem I built GrowthLayer to solve. But tool or no tool, the principle stands: the MVP is the cheapest test of an assumption. Everything else is scope creep.

---

_Atticus Li leads enterprise experimentation at NRG Energy and advises SaaS companies on MVP discipline and validated learning. Hypothesis-first MVP scoping is a core component of his PRISM framework. Learn more at atticusli.com._

About the author

Atticus Li

Applied Experimentation Lead at NRG Energy (Fortune 150) · Creator of the PRISM Method

Atticus Li leads applied experimentation at NRG Energy (Fortune 150), where he and his team run more than 100 controlled experiments per year on customer-facing surfaces. He is the creator of the PRISM Method, a framework for high-velocity experimentation programs at large enterprises. He writes regularly about the statistical and operational details of A/B testing — the parts most CRO content skips.
