# SaaS Pivot or Persevere: How to Decide (Before You Need To)
_By Atticus Li -- Applied Experimentation Lead at NRG Energy (Fortune 150). Creator of the PRISM Method. Learn more at atticusli.com._
---
The hardest decision in SaaS is not what to build. It is when to stop building what you have been building.
The teams that get this decision right have a specific discipline in place before the decision arrives. The teams that get it wrong are making the decision in the moment, based on fundraising mood, founder psychology, a single bad quarter, or a single good conversation with a prospect. None of those are evidence.
The literature I trust on this -- Eric Ries's original _Lean Startup_ taxonomy of pivot types, Steve Blank's Customer Development framework, the YC and First Round Review writing on founder decision-making, Marty Cagan's _Transformed_, Sean Ellis and Morgan Brown's growth material -- keeps landing on the same pattern:
Pivot-or-persevere is not a judgment call made in the moment. It is a decision pre-committed to evidence criteria, evaluated against those criteria, and executed without hedging. Teams that make this decision in the moment get it wrong more often than they admit.
This post is about installing the discipline before you need it.
## The Problem with Pivoting in the Moment
Most pivot decisions happen in one of three emotional contexts:
- Post-bad-quarter panic. Revenue missed. Runway shortened. Board concerned. The team jumps to "maybe we need to pivot," and the conversation happens under pressure.
- Post-exciting-conversation euphoria. A prospect says something flattering about a new space. A pitch resonates at a conference. Suddenly "maybe we should be in X."
- Founder fatigue. The current space has been hard. Another space looks easier from here. The grass is greener.
None of these are evidence of whether to pivot. All of them produce the same pathology: decisions made on feelings, retrospectively rationalized as data-driven. Teams that make a lot of pivot decisions this way usually have one of two outcomes -- they pivot repeatedly without compounding learning, or they pivot once at the wrong moment and destroy the progress they already had.
## The Core Principle: Pre-Commit to Decision Rules
The discipline that separates teams that pivot well from teams that pivot poorly: before you start a body of work, write down what evidence would cause you to pivot, persevere, or kill.
This is not a founder exercise to be done alone in a journal. It is a leadership commitment that goes into the operating plan. "We believe that this strategy will produce X results by Y date. If we observe Z, we will pivot. If we observe W, we will persevere. If we observe V, we will kill and restart."
The rigor here comes from pre-commitment. Evidence is much harder to spin when you wrote down in advance what you would do with it.
Without pre-commitment, every result is subject to interpretation. "Our signups slowed, but we think it is because of the season." "Our retention is low, but we think the product needs another quarter." "Our CAC is rising, but we are just in an investment phase." All of these might be true. Without pre-committed criteria, none of them are falsifiable.
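The pre-commitment above can be sketched as a tiny data structure: write the rule down before the period starts, then evaluate the observed number against it with no room for interpretation. Everything here (the metric name, the thresholds, the three labels) is an illustrative assumption for the sketch, not a standard API:

```python
from dataclasses import dataclass

# Hypothetical pre-registered decision rule. The metric and thresholds
# are written down BEFORE the quarter starts; frozen=True makes the
# rule immutable so it cannot be quietly adjusted after the fact.
@dataclass(frozen=True)
class DecisionRule:
    metric: str         # e.g. "trial_to_paid_conversion"
    kill_below: float   # observed below this -> kill
    pivot_below: float  # between kill_below and this -> pivot;
                        # at or above this -> persevere

    def decide(self, observed: float) -> str:
        if observed < self.kill_below:
            return "kill"
        if observed < self.pivot_below:
            return "pivot"
        return "persevere"

rule = DecisionRule(metric="trial_to_paid_conversion",
                    kill_below=0.02, pivot_below=0.05)

print(rule.decide(0.01))  # kill
print(rule.decide(0.03))  # pivot
print(rule.decide(0.07))  # persevere
```

The point of the `frozen` dataclass is the point of the whole discipline: the rule is fixed at write time, and the review is a lookup, not a negotiation.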
## The Pivot Types (Ries's Taxonomy, Applied to SaaS)
Eric Ries identified ten pivot types. The ones I see matter most in SaaS:
### 1. Customer Segment Pivot
The problem is real and your solution works, but for a different customer than you targeted. Classic example: a product built for enterprise that ends up winning with SMB, or vice versa.
Evidence that suggests this pivot: strong usage and retention in a customer segment you did not prioritize. Conversely, weak results in the segment you did prioritize despite shipping the right product.
### 2. Customer Need Pivot
You have the right customer but they want a different job done. You thought you were selling a project management tool; they are using it as a CRM. The job-to-be-done pivot.
Evidence: interviews reveal a different primary use case. Usage patterns do not match the intended use case. Retention is higher for a secondary use case than the primary one.
### 3. Channel Pivot
Same customer, same job, different route to market. You thought the channel was inbound content; the customers came from outbound. You thought PLG; they want sales-assisted. You thought partnerships; you need direct.
Evidence: CAC in the intended channel is uneconomical; CAC in an unplanned channel is strong.
### 4. Engine-of-Growth Pivot
Moving between the three fundamental growth engines: viral, sticky, paid. Most SaaS companies run a mix, but the primary engine drives the strategy. Recognizing that the primary engine is not what you assumed is a legitimate pivot.
### 5. Business Architecture Pivot
Moving between serving the high end and the low end of the market, or between high-margin/low-volume and low-margin/high-volume. Often follows from a customer segment pivot.
### 6. Platform Pivot
Moving from application to platform, or vice versa. Rarely the right move early, occasionally the right move at scale.
## Illegitimate "Pivots"
- Abandonment dressed as a pivot. "The space was too hard so we went somewhere else." If nothing from the previous work carries forward, it is a restart, not a pivot.
- Chasing what is fundable. Pivoting because LLM wrapper companies are fundable this year. That is repositioning for investors, not a pivot driven by evidence.
- Distraction-driven pivots. The founder got excited about a new space. Usually a signal of something else -- burnout, lack of traction in the current space -- that a pivot will not fix.
Real pivots preserve most of what the team has learned and shipped. If the pivot requires throwing everything away, it is a restart, and you should be honest about that.
## The Evidence Framework
What counts as evidence strong enough to drive a pivot decision?
- Quantitative signal, not anecdotes. A single bad prospect conversation is anecdote. A pattern across a pre-defined cohort is signal. Pre-define which cohorts and what time horizon count.
- Statistical adequacy. A decision rule like "if conversion is below X" only means something if the sample size supports the inference. Underpowered evidence is not evidence.
- Multiple modality triangulation. If the pivot case is strong, it should show up in both behavioral data and customer interviews. If only one modality supports it, be skeptical.
- Pre-registered thresholds. The thresholds should be set in advance. Post-hoc threshold setting is storytelling.
- Guardrail against premature pivots. Agreed-upon minimum time horizons before the decision can be revisited. Twelve months is a common minimum for strategic decisions; less for tactical ones.
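To make the statistical-adequacy point concrete: a rule like "pivot if conversion is below 4%" against a believed 5% baseline requires a surprisingly large sample before the two rates can be told apart. A minimal sketch, assuming the standard two-proportion normal approximation with a two-sided 5% alpha and 80% power (the specific rates are illustrative):

```python
import math

def required_sample_per_arm(p_base: float, p_threshold: float,
                            z_alpha: float = 1.96,  # two-sided alpha = 0.05
                            z_beta: float = 0.84    # power = 0.80
                            ) -> int:
    """Normal-approximation sample size needed to distinguish p_base
    from p_threshold: n = (z_a + z_b)^2 * (var1 + var2) / delta^2."""
    variance = p_base * (1 - p_base) + p_threshold * (1 - p_threshold)
    n = ((z_alpha + z_beta) ** 2) * variance / (p_base - p_threshold) ** 2
    return math.ceil(n)

# Believed baseline 5%, pivot threshold 4%:
print(required_sample_per_arm(0.05, 0.04))  # 6735
```

Thousands of trials for a one-point difference in conversion. If your pre-committed review window cannot plausibly accumulate that much data, the threshold is underpowered and the rule needs a coarser metric or a longer horizon.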
## The Persevere Decision
Perseverance is the default, but it is also a decision that should be made explicitly. "We reviewed the evidence against our pre-committed criteria. The criteria for perseverance were met. We will continue the current strategy for another [period] and revisit."
Explicit perseverance is different from default perseverance. Default perseverance is just not making a decision. Explicit perseverance is a reaffirmation.
## The Kill Decision
The rarest and most undervalued decision. Sometimes the right answer is not pivoting -- it is stopping altogether and redirecting the team to the next best use of their time.
Most teams resist this decision. Founders personally identify with the work. Boards do not want to write off the investment. Employees do not want to end the thing they have built. So the work continues past the point where the evidence says it should not.
A strong pivot-or-persevere framework includes explicit kill criteria. If the evidence shows X, you kill. Not pivot -- kill. The discipline here protects teams from the long tail of zombie projects that consume resources and produce nothing.
## A Decision Framework for Pivot-or-Persevere
1. Pre-register the strategy's assumptions. What is the hypothesis about market, customer, need, channel, growth engine?
2. Pre-register the decision thresholds. For each assumption, what evidence would support pivot, persevere, or kill?
3. Define the time horizon and sample size required for the decision to be valid.
4. Run the strategy with instrumented metrics tied to the assumptions.
5. Review against pre-committed criteria at the pre-committed time. Not earlier. Not later.
6. Make the explicit decision: pivot (specify type), persevere (with what adjustments), or kill.
7. Document the decision and what carries forward.
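The time-horizon and review-timing steps can be enforced mechanically: gate the review so it only proceeds once both the pre-committed date and the pre-committed sample size have been reached. A minimal sketch, with hypothetical names and numbers:

```python
from datetime import date

# Hypothetical validity gate for the review step. Reviewing early
# invites mood-driven decisions; reviewing with too little data
# invites underpowered inference. Both pre-commitments must hold.
def review_is_valid(today: date, committed_review_date: date,
                    observed_n: int, required_n: int) -> bool:
    return today >= committed_review_date and observed_n >= required_n

print(review_is_valid(date(2026, 1, 15), date(2026, 1, 1), 8000, 5000))  # True
print(review_is_valid(date(2025, 10, 1), date(2026, 1, 1), 8000, 5000))  # False: too early
print(review_is_valid(date(2026, 1, 15), date(2026, 1, 1), 3000, 5000))  # False: underpowered
```

In practice the gate might live in a dashboard or a calendar rule rather than code; the point is that "not earlier, not later" is checked by something other than the people who want a particular answer.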
## Pivot-or-Persevere Checklist
- [ ] Strategic assumptions explicitly written down
- [ ] Evidence thresholds for pivot / persevere / kill pre-registered
- [ ] Time horizon and sample size for valid decision pre-committed
- [ ] Instrumentation tied to each assumption
- [ ] Scheduled review date set in advance, not reactive
- [ ] Decision made against pre-committed criteria, not against current mood
- [ ] If pivot: type (Ries taxonomy) identified, carryforward documented
- [ ] If persevere: explicit with what adjustments
- [ ] If kill: team redirected cleanly, learnings documented
- [ ] Post-decision commitment period (minimum time before next review)
## Common Mistakes
- Making the decision in the moment. Always worse than making it against pre-committed criteria.
- Spinning evidence to avoid the hard decision. The discipline works only if the evidence is taken at face value.
- Mistaking restart for pivot. Honest labeling matters.
- Pivoting too often. Each pivot consumes organizational energy. Too many pivots produce motion without progress.
- Not pivoting when you should. The reverse failure. Sunk cost, attachment, investor pressure, founder identity. All of these keep teams on bad strategies longer than the evidence warrants.
- Failing to document the learning. A pivot without documented learning is a restart that burned a year.
## The Bottom Line
Pivot-or-persevere is a discipline, not a judgment. The teams that do this well have pre-committed criteria, structured review cadences, and the honesty to make explicit decisions in each direction -- including kill. The teams that do it poorly make the decision in the moment, rationalize afterward, and repeat the pattern.
This connects directly to the Lean Startup discipline of pre-registering hypotheses and decision rules. The pivot decision is the biggest application of that discipline in a SaaS company.
If your team is running strategic experiments and losing track of what evidence was supposed to drive what decisions, that is the exact problem I built GrowthLayer to solve. But tool or no tool, the principle stands: decide what the evidence would say before you see the evidence. Then honor the decision.
---
_Atticus Li leads enterprise experimentation at NRG Energy and advises SaaS companies on strategic decision-making under uncertainty. Pre-committed decision criteria are a core component of his PRISM framework. Learn more at atticusli.com._