How to Increase SaaS Customer Lifetime Value: The 3 Levers That Compound
_By Atticus Li -- Applied Experimentation Lead at NRG Energy (Fortune 150). Creator of the PRISM Method. Learn more at atticusli.com._
---
Customer Lifetime Value is the most-requested and least-understood number in most SaaS businesses I have worked with.
Boards ask for it. Finance slides quote it. Paid acquisition teams use it to justify CAC. Growth teams treat it as the north-star metric they are supposedly moving. Almost nobody can explain what went into the number they are quoting, and almost nobody can point to a specific decision that moved it.
That is because CLV is not a lever. It is a scoreboard.
The widely cited SaaS research -- Patrick Campbell's retention work at ProfitWell and Paddle, David Skok's SaaS metrics framework, the OpenView and Bessemer Cloud benchmarks, Reforge's retention engine material, Lenny Rachitsky's growth deep-dives -- converges on a simple point that SaaS teams keep re-learning the hard way:
You do not increase CLV. You increase the three upstream levers whose compound effect becomes CLV -- activation, retention, and expansion -- through structured experimentation over time.
Everything else in this post is how to do that.
What CLV Actually Means (and How to Calculate It)
Customer Lifetime Value is the total gross profit you expect from an average customer over their relationship with your product.
The simplest usable formula:
```
CLV = (ARPU × Gross Margin) / Monthly Churn Rate
```
Where:
- ARPU = average revenue per user per month
- Gross Margin = revenue minus cost of goods sold, expressed as a percentage of revenue (typically 70-80% in SaaS)
- Monthly Churn Rate = percentage of customers who cancel each month
A SaaS product with $100 ARPU, 75% gross margin, and 3% monthly churn has a CLV of (100 × 0.75) / 0.03 = $2,500.
This formula is enough to start making decisions. It has limitations -- it assumes a flat churn rate, it ignores expansion revenue, it does not account for the fact that enterprise customers behave differently from SMB -- but it beats not calculating CLV at all.
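The formula and the worked example above translate directly into a few lines of code. This is a sketch -- the function name is mine, not from any library:

```python
def simple_clv(arpu: float, gross_margin: float, monthly_churn: float) -> float:
    """Simple CLV: monthly gross profit per customer divided by monthly churn rate."""
    return (arpu * gross_margin) / monthly_churn

# The worked example from the text: $100 ARPU, 75% margin, 3% monthly churn
print(round(simple_clv(100, 0.75, 0.03)))  # the $2,500 from the worked example
```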
For a more accurate picture, use cohort-based CLV: sum the expected revenue from a cohort of customers over a defined window (24 or 36 months), weighted by observed survival. This reflects real-world decay patterns and expansion behavior. If you can calculate both, do. The gap between simple CLV and cohort CLV is often significant, and the cohort version is typically more honest.
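The cohort version is a weighted sum rather than a closed-form formula. A minimal sketch, with made-up numbers for a hypothetical six-month window (real windows would be 24-36 months):

```python
def cohort_clv(arpu_by_month, survival_by_month, gross_margin):
    """Cohort CLV: gross profit summed over a window, weighted by observed survival.

    arpu_by_month[t] is average revenue per *surviving* customer in month t
    (this captures expansion); survival_by_month[t] is the fraction of the
    original cohort still active in month t (real decay, not a flat churn rate).
    """
    return sum(arpu * surv * gross_margin
               for arpu, surv in zip(arpu_by_month, survival_by_month))

# Hypothetical cohort: survival decays fastest early, ARPU drifts up via expansion
arpu = [100, 100, 105, 105, 110, 110]
survival = [1.00, 0.90, 0.84, 0.80, 0.77, 0.75]
print(round(cohort_clv(arpu, survival, 0.75), 2))  # gross profit over the window
```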
The Only LTV:CAC Ratio You Should Care About
The commonly cited target is LTV:CAC of at least 3:1. Best-in-class SaaS companies in the public benchmarks run 5:1 or better.
But the ratio alone is a trap. A 4:1 ratio with 36-month payback is a worse business than a 3:1 ratio with 12-month payback. Always look at CAC payback period alongside LTV:CAC. Cash is the constraint, not the ratio.
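Both numbers are one-liners, which is exactly why quoting only the ratio is inexcusable. A sketch with hypothetical figures:

```python
def ltv_cac_ratio(clv: float, cac: float) -> float:
    """The headline ratio -- necessary but not sufficient."""
    return clv / cac

def cac_payback_months(cac: float, arpu: float, gross_margin: float) -> float:
    """Months of gross profit needed to recover the acquisition cost."""
    return cac / (arpu * gross_margin)

# Hypothetical: $2,500 CLV, $625 CAC, $100 ARPU, 75% margin
print(ltv_cac_ratio(2500, 625))            # 4.0 -- looks healthy in isolation
print(cac_payback_months(625, 100, 0.75))  # ~8.3 months to get the cash back
```

Report them together: a strong ratio with a long payback still starves the business of cash.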
With the metric defined, here is what actually moves it.
The Three Compound Levers
CLV compounds from three upstream levers, each of which can be tested and improved individually. All three work together, and weakness in any one of them caps the other two.
- Activation -- how many new signups reach first successful action
- Retention -- how many activated customers stay paying
- Expansion -- how much each retained customer grows their spend
You cannot test "CLV". You can test every input into each of these three.
Lever 1: Activation
Activation is the foundation. Unactivated signups churn at near-100% rates, and they drag every CLV number down because they are counted as customers long before they behave like customers.
I wrote a full post on SaaS customer onboarding best practices covering this lever in detail. The compressed version:
- Activation is the percentage of signups who reach a first successful action in a defined window (typically 7 days)
- The highest-leverage principle is to continuously remove steps between signup and that first action
- Published research and test data consistently show that shorter time to value correlates with higher retention
If your activation rate is below 30-40%, no retention tactic will save your CLV. Fix activation first.
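Measuring activation is straightforward once the first successful action and the window are defined. A sketch (data shape is my own, not from any analytics tool):

```python
from datetime import datetime, timedelta

def activation_rate(users, window_days: int = 7) -> float:
    """Share of signups whose first successful action fell within the window.

    users: list of (signup_time, first_action_time_or_None) tuples.
    """
    window = timedelta(days=window_days)
    activated = sum(1 for signup, first_action in users
                    if first_action is not None and first_action - signup <= window)
    return activated / len(users)

cohort = [
    (datetime(2024, 1, 1), datetime(2024, 1, 4)),   # activated on day 3
    (datetime(2024, 1, 1), datetime(2024, 1, 15)),  # acted, but outside the window
    (datetime(2024, 1, 1), None),                   # never reached the action
]
print(f"{activation_rate(cohort):.0%}")  # one of three counts as activated
```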
Lever 2: Retention
Retention is where CLV gets made or lost. A one-point change in monthly churn can move CLV by 20-40% in most models, because the effect compounds.
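You can see the compounding directly in the simple model from earlier. The exact percentages depend on the model and the direction of the change, but the asymmetry is the point:

```python
def simple_clv(arpu, gross_margin, monthly_churn):
    return (arpu * gross_margin) / monthly_churn

base = simple_clv(100, 0.75, 0.03)    # the $2,500 example from earlier
worse = simple_clv(100, 0.75, 0.04)   # churn up one point
better = simple_clv(100, 0.75, 0.02)  # churn down one point

print(f"{(worse - base) / base:+.0%}")   # one extra point of churn erases a quarter of CLV
print(f"{(better - base) / base:+.0%}")  # one point saved lifts CLV by half in this model
```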
The places retention consistently breaks:
The Activation-to-Habit Gap
Users who activate but never build a habit churn within the first 1-2 months. The gap between first successful action and habitual use is where most of this churn happens.
What has tested well in this window:
- Usage-triggered check-ins. Behavioral emails when users go silent after initial activity, specifically asking about blockers rather than pitching features.
- Reintroduction of key features in context. Not a tour -- a prompt that appears when the user is in a situation where the feature would help.
- Second-use reminders tied to user-declared intent. If they signed up to do X, remind them when they have not done X in 2 weeks.
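The first tactic above reduces to a simple query: which activated users have gone quiet? A sketch with hypothetical data shapes (a real implementation would read from your event store):

```python
from datetime import datetime, timedelta

def silent_users(last_activity, activated, now, quiet_days: int = 14):
    """Activated users with no recorded activity for `quiet_days` --
    candidates for a check-in asking about blockers, not a feature pitch.

    last_activity: {user_id: datetime of most recent action}.
    """
    cutoff = now - timedelta(days=quiet_days)
    return sorted(user for user in activated if last_activity[user] <= cutoff)

now = datetime(2024, 6, 30)
last_activity = {"ana": datetime(2024, 6, 29), "ben": datetime(2024, 6, 1)}
print(silent_users(last_activity, {"ana", "ben"}, now))  # only "ben" has gone quiet
```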
The Involuntary Churn Problem
A surprising share of churn in most SaaS businesses is involuntary -- failed payments, expired cards, billing errors. ProfitWell's retention research has repeatedly shown that involuntary churn can account for 20-40% of total churn in mid-market SaaS, and the best-in-class SaaS companies reclaim a significant fraction of it.
The specific interventions that tend to work:
- Card-updater integrations with Stripe/Adyen
- Pre-dunning (reminders before the card expires, not after)
- Dunning sequences that escalate smartly (in-app, then email, then a final grace period)
- Clear recovery flows when payments fail
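Pre-dunning in particular is just date arithmetic. A sketch -- the reminder offsets here are illustrative, not a recommendation from any payment provider:

```python
from datetime import date, timedelta

def pre_dunning_dates(card_expiry: date, offsets_days=(30, 14, 3)):
    """Reminder dates *before* the card expires (pre-dunning), earliest first."""
    return [card_expiry - timedelta(days=offset) for offset in offsets_days]

# Card expiring at the end of March: nudge a month out, two weeks out, and just before
print(pre_dunning_dates(date(2025, 3, 31)))
```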
This is one of the highest-ROI retention projects available to most SaaS teams, and most of them are not doing it.
The Contract Renewal Moment
For any product with an annual contract, the renewal conversation is a retention make-or-break. What tends to work:
- Renewal outreach 90 days ahead, not 30. Thirty days is too late to address dissatisfaction.
- Value reporting ahead of renewal. An auto-generated summary of what the customer achieved with the product in the last 12 months.
- Proactive discovery of expansion opportunities. Renewal conversations that only defend existing spend leave expansion on the table.
Cancellation Friction (Done Ethically)
Cancel flows that ask for a reason, offer alternatives, and make pause options available tend to recover meaningful retention without resorting to dark patterns. The principle: make cancellation easy, but use the cancellation moment to surface alternatives (pause, downgrade, feature change) the user might not know exist.
Lever 3: Expansion
Expansion is where CLV stops being a retention game and starts being a compounding growth game. Net revenue retention (NRR) above 100% means your customer base grows in value even if you stop acquiring new customers. The best public SaaS companies run NRR of 120-130%+.
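NRR is worth computing by hand once so the definition sticks: it measures only the existing customer base, so new-customer MRR is excluded. A sketch with hypothetical figures:

```python
def net_revenue_retention(start_mrr, expansion, contraction, churned_mrr):
    """NRR over a period, from the existing customer base only
    (new-customer MRR is excluded by definition)."""
    return (start_mrr + expansion - contraction - churned_mrr) / start_mrr

# Hypothetical month: $100k starting MRR, $8k expansion, $1k downgrades, $2k churned
print(net_revenue_retention(100_000, 8_000, 1_000, 2_000))  # 1.05, i.e. 105% NRR
```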
Expansion comes from four mechanics, and each of them is testable:
Seat Expansion
Additional seats within an existing team account. The bottleneck is almost always invitation friction and permission complexity. Test the invitation flow the same way you test signup -- measure every step, remove anything that does not increase the chance of a completed invite.
Usage-Based Expansion
If your pricing has a usage component (API calls, events, stored data, minutes), the lever is helping the customer consume more of what they are paying for. That sounds backwards, but customers who use more value the product more, and value creates the justification for larger plans.
Plan Upgrades
The classic upsell. The patterns that tend to work: usage-based upgrade triggers ("you have used 80% of your plan limit"), value-based upgrade prompts ("teams your size unlock X on the Growth plan"), and usage reporting that makes the upgrade feel earned rather than pushed.
Cross-Sell
New products or modules bought by existing customers. This is the longest-cycle expansion mechanic and usually requires sales involvement, even in PLG companies.
NRR is the metric that captures all of this. If NRR is flat or below 100%, your CLV has a ceiling no retention work alone can raise.
Common CLV Mistakes I See Regularly
- Quoting a single CLV number. CLV varies dramatically by cohort (acquisition channel, company size, use case). The aggregate number hides where the leverage actually is. Always segment.
- Optimizing CLV through price increases alone. Price increases can lift CLV in the short term and destroy it in the long term by driving up churn. Model the net effect before pulling this lever.
- Ignoring the activation lever because it is upstream. Teams focused on "retention" often skip the activation work that makes retention possible. Unactivated signups are where the largest CLV losses live.
- Treating CLV as static. CLV should be recalculated at least quarterly using rolling cohort data. The number from two years ago is not the number you should be quoting now.
- Running retention experiments without pre-defined activation and expansion guardrails. A retention change that hurts activation or expansion can look like a win locally while destroying CLV at the aggregate level.
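The first mistake above -- quoting one aggregate number -- is the cheapest to fix. A sketch of segmented simple CLV, with hypothetical field names and data:

```python
from collections import defaultdict

def clv_by_segment(customers):
    """Average simple CLV per acquisition channel, instead of one aggregate number.

    customers: list of dicts with hypothetical fields
    channel, arpu, gross_margin, monthly_churn.
    """
    by_channel = defaultdict(list)
    for c in customers:
        clv = (c["arpu"] * c["gross_margin"]) / c["monthly_churn"]
        by_channel[c["channel"]].append(clv)
    return {segment: sum(values) / len(values) for segment, values in by_channel.items()}

customers = [
    {"channel": "paid", "arpu": 80, "gross_margin": 0.75, "monthly_churn": 0.05},
    {"channel": "organic", "arpu": 100, "gross_margin": 0.75, "monthly_churn": 0.02},
]
print(clv_by_segment(customers))  # paid ~ $1,200 vs organic ~ $3,750 -- the aggregate hides this
```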
A Framework for Prioritizing CLV Work
When CLV is underperforming, work the three levers in a specific order:
1. Segment the problem. Break CLV down by cohort -- channel, plan, company size, use case. The aggregate number is never where the leverage is.
2. Find the weakest lever in each segment. Is the weakness activation, retention, or expansion? The answer differs by segment.
3. Work activation first if activation rate is below 30-40%. Nothing downstream works if activation is broken.
4. Attack involuntary churn second. Highest ROI retention project in most SaaS businesses.
5. Build expansion mechanics third. Expansion compounds over time, so start earlier than feels necessary.
6. Measure through structured experiments. Every significant change through a test, with activation and expansion guardrails on every retention test (and vice versa).
CLV Experiment Checklist
Before running any test intended to move CLV:
- [ ] Segment defined (channel/plan/size/use case) -- no aggregate-only tests
- [ ] Specific lever identified (activation / retention / expansion)
- [ ] Hypothesis written: "Changing X will move Y because Z"
- [ ] Primary metric: the direct upstream metric (activation rate, monthly retention, NRR), not CLV itself
- [ ] CLV impact modeled from the primary-metric change
- [ ] Guardrail metrics on the other two levers
- [ ] Sample size pre-calculated from baseline and MDE
- [ ] Test duration long enough to capture retention signal (often 60-90 days, sometimes more)
- [ ] A/A test run if instrumentation changed
- [ ] Results documented -- and fed back into the CLV model
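The sample-size item in the checklist is the one teams most often skip. One standard way to do it -- a two-proportion normal approximation, sketched here with an absolute MDE (other conventions exist):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(baseline: float, mde: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-arm sample size for a two-proportion test (normal approximation).

    baseline: control conversion rate; mde: absolute minimum detectable effect.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / mde ** 2
    return ceil(n)

# e.g. 30% baseline activation rate, detecting a +2-point lift
print(sample_size_per_arm(0.30, 0.02))  # thousands of signups per arm, not hundreds
```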
The Bottom Line
CLV is not a metric you move. It is a number that moves when you correctly identify which upstream lever is holding it back, and you systematically test changes to that lever.
The SaaS companies that compound CLV do three unglamorous things consistently: they keep activation rates high, they treat retention as a portfolio of small interventions run as experiments, and they build expansion mechanics into the product rather than bolting them on as afterthoughts.
If your team is running retention and expansion tests and losing track of which changes actually moved the levers, that is the exact problem I built GrowthLayer to solve. But tool or no tool, the principle stands: CLV is the scoreboard. Activation, retention, and expansion are the game. Stop optimizing the scoreboard.
---
_Atticus Li leads enterprise experimentation at NRG Energy and advises SaaS companies on activation, retention, and expansion. Cohort-based CLV modeling is a core component of his PRISM framework. Learn more at atticusli.com._
Atticus Li leads applied experimentation at NRG Energy (Fortune 150), where he and his team run more than 100 controlled experiments per year on customer-facing surfaces. He is the creator of the PRISM Method, a framework for high-velocity experimentation programs at large enterprises. He writes regularly about the statistical and operational details of A/B testing — the parts most CRO content skips.