
User Segmentation for SaaS: How to Find the Segments That Drive Retention

_By Atticus Li -- Applied Experimentation Lead at NRG Energy (Fortune 150). Creator of the PRISM Method. Learn more at atticusli.com._


---

Most SaaS user segmentation is useless.

Most of what I see -- in decks, in dashboards, in research reports produced by growth and CX teams -- is demographic segmentation. Company size. Industry. Geography. Job title. These feel like segments because the users are visibly different from each other. They are rarely segments in any way that matters, because the groups do not actually behave differently in the product.

Segmentation is not about grouping users by attributes. Segmentation is about finding groups of users whose behavior diverges in ways that should change your product, pricing, onboarding, or experimentation strategy.

The research and practice I trust on this -- Clayton Christensen's Jobs-to-be-Done work, Bob Moesta's interview methodology, Reforge's segmentation and retention content, Amplitude's cohort analysis literature, Mixpanel's behavioral analytics writing -- all converge on one principle:

Behavioral segments beat demographic segments nearly every time. Segments that tie to activation, retention, expansion, or revenue behavior are the ones worth building product and experimentation strategy around. Everything else is slides for the board meeting.

This post is a working practitioner's guide to finding the segments that actually drive growth.

The Problem with Demographic Segmentation

Demographic attributes -- company size, industry, role, geography -- are easy to collect and easy to visualize. They are also usually poor predictors of in-product behavior.

A project management tool does not have a "marketing manager" segment and an "engineering manager" segment that behave meaningfully differently in the product. It has a "teams of 2-5 who collaborate asynchronously" segment and a "teams of 20+ who use it for project portfolio oversight" segment. Those behave differently. Those demand different product decisions. And they cut across every demographic slice.

Demographic segmentation tells you who the users are. Behavioral segmentation tells you what they do and what they need. The second is what you can actually design around.

The exception: when you know a specific demographic strongly predicts a specific behavior. If your data shows that enterprise customers have 3x the retention of SMB customers, that is worth treating as a segment. The test is whether the demographic attribute predicts behavior that matters -- not whether the attribute exists.
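
When a demographic candidate like that shows up, the check is cheap to run. Here is a minimal sketch, assuming a users export with a hypothetical demographic column (segment_attr) and a 90-day retention flag (retained_90d) -- the column names and file path are illustrative, not a standard schema:

```python
# Sketch: does a demographic attribute actually predict retention?
# 'segment_attr' (e.g. "enterprise" vs "smb") and 'retained_90d' are
# hypothetical column names for illustration.
import pandas as pd
from scipy.stats import chi2_contingency

users = pd.read_csv("users.csv")  # hypothetical export

# The effect size you care about: retention rate per attribute value.
print(users.groupby("segment_attr")["retained_90d"].mean())

# Chi-square test of independence: is the attribute related to retention at all?
table = pd.crosstab(users["segment_attr"], users["retained_90d"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.4g}")
# A large rate gap with a small p-value says the attribute is worth treating
# as a segment; a flat gap says it is just a label.
```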

The Four Segmentation Types That Actually Matter

1. Activation-Based Segmentation

Divide your new signups by whether they reached a defined activation moment in a defined window. Activated vs not-activated users are always two different populations. They look the same demographically. They behave nothing alike.

What to do with this segmentation:

  • Measure activation rate by acquisition channel, campaign, and onboarding variant. The variance is often dramatic -- a 40% activation rate from one channel vs 15% from another is common in SaaS. (A minimal sketch of this readout follows the list.)
  • Identify which channels are delivering high-quality signups and which are inflating the top of funnel with users who will never activate.
  • Design differentiated follow-up. Non-activated users need reactivation, not feature promotion.
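
To make the first bullet concrete, here is a minimal sketch of the channel-level readout. It assumes a signups export with hypothetical columns user_id, channel, and activated_14d (did the user hit the activation moment within the defined window?):

```python
# Sketch: activation rate by acquisition channel.
# Column names are illustrative, not a standard schema.
import pandas as pd

signups = pd.read_csv("signups.csv")  # hypothetical export

by_channel = (
    signups.groupby("channel")
    .agg(signups=("user_id", "size"), activation_rate=("activated_14d", "mean"))
    .sort_values("activation_rate", ascending=False)
)
print(by_channel)
# A channel at 40% sitting next to a channel at 15% is the signal: one is
# delivering users who reach value, the other is inflating top of funnel.
```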

2. Retention-Based Segmentation (Power Users / Core Users / Casual Users)

The most valuable behavioral segmentation for most SaaS products is usage intensity. Define "core behavior" -- the specific action that correlates with long-term retention -- and segment users by frequency of that behavior.

  • Power users perform core behavior multiple times per week.
  • Core users perform it weekly.
  • Casual users perform it monthly.
  • Dormant users have signed up but rarely return.
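
A minimal sketch of that banding, assuming an events export with hypothetical user_id, event, and ts columns, and treating 12+, 4+, and 1+ core actions per trailing 28 days as the power / core / casual thresholds -- numbers to tune per product, not fixed rules:

```python
# Sketch: band users by frequency of the core behavior over a trailing 28 days.
# The event name 'core_action' and the thresholds are assumptions.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["ts"])
window = events[events["ts"] >= events["ts"].max() - pd.Timedelta(days=28)]

counts = (
    window[window["event"] == "core_action"]
    .groupby("user_id")
    .size()
)

def band(n: int) -> str:
    if n >= 12:
        return "power"    # multiple times per week
    if n >= 4:
        return "core"     # roughly weekly
    if n >= 1:
        return "casual"   # roughly monthly
    return "dormant"      # signed up, rarely returns

# Include users with zero core actions in the window so dormant shows up.
bands = counts.reindex(events["user_id"].unique(), fill_value=0).map(band)
print(bands.value_counts())
```

A trailing window rather than calendar months keeps the bands stable as users drift between them; how wide to make it depends on the product's natural usage cadence.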

Each segment needs a different product, communication, and pricing strategy. The product decisions that delight power users often confuse casual users. The onboarding changes that help casual users are invisible to power users.

Most SaaS product teams optimize for their median user. The better move is often to optimize for the specific segment that drives the economics -- typically the power user and core user segments, because they account for a disproportionate share of retention and expansion revenue.

3. Jobs-to-be-Done Segmentation

Christensen and Moesta's framework: users do not buy products; they hire them to do a specific job in their lives. Two users in the same demographic can hire the same product for completely different jobs -- and the same user hires the same product for different jobs at different moments.

In practice, JTBD segmentation means identifying the 3-5 distinct jobs your product gets hired for and organizing your analysis around them. A note-taking app might be hired for:

  • Capturing meeting notes for later reference
  • Managing a knowledge base
  • Journaling for personal reflection
  • Drafting and editing long-form writing

These are four different products, shipped under one interface, used by different users at different moments. If your product analytics collapse all four into "users," you will make bad product decisions.

Moesta's interview methodology -- the "switch interview" -- is the best technique I know for surfacing JTBD segments. Talk to users who recently switched from a competitor or recently started paying. Understand the specific moment that triggered the change. Patterns emerge quickly.

4. Revenue / Expansion Segmentation

Users contribute unequally to revenue. Some cohorts expand. Some churn. Some sit flat. In most SaaS businesses, a small segment drives the majority of revenue growth through expansion, and a different small segment drives the majority of churn.

Revenue segmentation requires cohort analysis:

  • New MRR by acquisition cohort over time
  • Expansion MRR by cohort
  • Churn MRR by cohort
  • Net revenue retention (NRR) by segment

The segments worth building around are the ones with strong NRR. The segments worth fixing or deprioritizing are the ones with consistently negative NRR.
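
A minimal sketch of the cohort NRR readout, assuming a monthly MRR table with hypothetical account_id, cohort_month (month of first payment), month, and mrr columns:

```python
# Sketch: net revenue retention by acquisition cohort.
# Churned accounts simply stop appearing (or show mrr=0), so churn is
# captured automatically. Column names are illustrative.
import pandas as pd

mrr = pd.read_csv("mrr_monthly.csv", parse_dates=["cohort_month", "month"])

# Total MRR per cohort per calendar month.
cohort_mrr = mrr.groupby(["cohort_month", "month"])["mrr"].sum().unstack("month")

# Each cohort's first-month MRR is its own baseline.
first = mrr[mrr["month"] == mrr["cohort_month"]].groupby("cohort_month")["mrr"].sum()

# NRR > 1.0: expansion is outrunning churn. NRR < 1.0: the reverse.
nrr = cohort_mrr.div(first, axis=0)
print(nrr.round(2))
```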

How to Find Your Segments (Starting from Scratch)

If your team has not done rigorous segmentation work, here is a starting sequence:

  1. Define your core behavior. What single action, if performed repeatedly, predicts long-term retention? This is the behavior to segment around. Run a cohort analysis comparing long-retained users to churned users to identify it. (A screening sketch follows this list.)
  2. Run a cohort retention analysis. Cohorts by acquisition month, by channel, by plan. See which cohorts retain and which do not. Patterns will emerge.
  3. Overlay behavioral intensity. Split each cohort into power / core / casual / dormant based on the core behavior. Look at retention and expansion within each band.
  4. Conduct JTBD interviews. Ten to twenty switch interviews will surface the major jobs your product gets hired for. Code the interviews. Look for clustering.
  5. Test the segments in experimentation. Run your next experiments segmented by the candidate segments. If one segment responds differently to a test than another, the segmentation is real. If responses are indistinguishable, the segmentation is not informative.
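
For step 1, a screening sketch: compare how often retained vs churned users performed each candidate action in their first 14 days. The table and column names (events.csv, users.csv, signup_ts, retained_90d) are hypothetical, and the correlation is a shortlisting heuristic, not proof -- step 5 is where candidates get validated.

```python
# Sketch: rank candidate core behaviors by how strongly early-usage frequency
# separates retained from churned users. All names are illustrative.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["ts"])
users = pd.read_csv("users.csv", parse_dates=["signup_ts"])

merged = events.merge(users, on="user_id")
early = merged[merged["ts"] <= merged["signup_ts"] + pd.Timedelta(days=14)]

# Events per user in the first 14 days, one column per event type.
counts = early.pivot_table(
    index="user_id", columns="event", aggfunc="size", fill_value=0
)
counts = counts.join(users.set_index("user_id")["retained_90d"])

# Mean early frequency of each event among retained vs churned users.
gap = counts.groupby("retained_90d").mean().T
gap["lift"] = gap[True] / gap[False].clip(lower=0.01)  # avoid divide-by-zero
print(gap.sort_values("lift", ascending=False).head(10))
```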

The test of whether a segmentation is real: does it predict different responses to changes in the product? If it does, it is real. If it does not, it is a label.

Segmentation in Experimentation

Segmentation and experimentation compound. Some changes lift activation for casual users and hurt it for power users. Some retention interventions work beautifully for one JTBD segment and do nothing for the others. Aggregated test results mask these effects.

Best practice in segmentation-aware experimentation:

  • Pre-register the segments you will analyze. Post-hoc segment discovery is how teams produce false-positive lifts; the segments should be defined before the test runs, and you should not report only the slices that flatter the hypothesis.
  • Apply Bonferroni or similar corrections when you genuinely have multiple pre-registered segments. The multiple-comparison problem is real. (A correction sketch follows this list.)
  • Do not draw conclusions from segment-level evidence that is not statistically supported. "Mobile users seemed to respond better" is not a finding if the segment was not powered.
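
A minimal sketch of that correction step, using statsmodels' two-proportion z-test and multipletests (both real APIs); the segment names and counts are made up for illustration:

```python
# Sketch: segment-level A/B readout with a Bonferroni correction across
# pre-registered segments. Numbers below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.multitest import multipletests

# (segment, control conversions, control n, treatment conversions, treatment n)
segments = [
    ("power",  420, 3000, 465, 3000),
    ("core",   510, 5000, 540, 5000),
    ("casual", 300, 8000, 310, 8000),
]

pvals = []
for name, cc, cn, tc, tn in segments:
    _, p = proportions_ztest(count=[tc, cc], nobs=[tn, cn])
    pvals.append(p)

# Bonferroni: significance threshold shared across all pre-registered segments.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for (name, *_), p, padj, sig in zip(segments, pvals, p_adj, reject):
    print(f"{name}: raw p={p:.4f}, adjusted p={padj:.4f}, significant={sig}")
```

Bonferroni is deliberately conservative; with many segments, a less strict method like Holm (method="holm") is a common substitute, but the principle is the same: the more slices you read, the higher the bar each slice must clear.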

When done well, segment-aware testing produces a body of evidence that lets you ship different experiences to different segments -- with confidence that the differentiation is earned.

Common Segmentation Mistakes

  • Segmenting on everything, using nothing. Teams collect demographic attributes at signup and never use them. Segment only on attributes and behaviors you will actually design around.
  • Calling a list a segment. "Users who clicked on the pricing page last week" is a list. A segment is defined by a behavioral pattern that predicts something.
  • Ignoring segment size. A segment that is 0.5% of your base is not a segment worth optimizing for -- no matter how interesting its behavior is.
  • Discovering segments post-hoc and treating them as real. Post-hoc segmentation is storytelling, not analysis.
  • Over-personalizing before validating the segmentation. Shipping personalized experiences for unvalidated segments introduces complexity without known benefit.

A Framework for Segmentation Work

  1. Start with core behavior. What predicts retention? Segment on that.
  2. Layer cohort retention. Acquisition cohort × behavioral band.
  3. Run JTBD interviews to surface jobs. Ten to twenty switch interviews.
  4. Validate candidate segments through pre-registered experimentation. Segments that respond differently to tests are real segments.
  5. Feed validated segments back into product, pricing, and experimentation strategy.
  6. Prune. Retire segments that do not produce different product decisions. Segmentation complexity compounds; pay it down.

Segmentation Test Checklist

  • [ ] Core behavior defined based on retention cohort analysis
  • [ ] Activation-based segmentation in place (activated vs not)
  • [ ] Behavioral-intensity segments defined (power / core / casual / dormant)
  • [ ] JTBD segments surfaced from switch interviews where applicable
  • [ ] Revenue / expansion segments tracked via cohort NRR
  • [ ] Segments defined _before_ analyzing test results (pre-registered)
  • [ ] Multiple-comparison corrections applied when testing across segments
  • [ ] Segment sizes large enough for statistical power (sizing sketch after this checklist)
  • [ ] Segments validated by differential response to tests, not assumed from demographics
  • [ ] Segments retired when they stop producing different product decisions
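
The power item deserves a number. A quick way to sanity-check whether a segment can support a finding at all, using statsmodels' power tools; the baseline rate and minimum detectable effect are hypothetical inputs:

```python
# Sketch: required sample size per arm, per segment, for a conversion metric.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.15  # segment's current conversion rate (assumed)
mde = 0.02       # smallest absolute lift worth detecting (assumed)

effect = proportion_effectsize(baseline + mde, baseline)
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"~{n_per_arm:,.0f} users per arm, per segment")
# If a segment cannot supply this traffic in a reasonable test window,
# its results are directional at best -- do not report them as findings.
```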

The Bottom Line

Segmentation is not a research deliverable. It is a decision framework.

If your segmentation work does not change what the product team builds, what the growth team tests, how pricing is structured, or how onboarding routes users, it is not real segmentation. It is a slide.

The segments that earn their keep are the ones where different groups respond differently to changes in the product -- validated through experimentation, not assumed from demographics. Start with core behavior. Layer in JTBD. Validate through tests. Prune ruthlessly.

If your team is running segmented experiments and losing track of which segments responded to which changes, that is the exact problem I built GrowthLayer to solve. But tool or no tool, the principle stands: segments are defined by behavior that predicts outcomes, not by attributes that flatter the deck.

---

_Atticus Li leads enterprise experimentation at NRG Energy and advises SaaS companies on behavioral segmentation and cohort analysis. Pre-registered segment analysis is a core component of his PRISM framework. Learn more at atticusli.com._

About the author

Atticus Li

Applied Experimentation Lead at NRG Energy (Fortune 150) · Creator of the PRISM Method

Atticus Li leads applied experimentation at NRG Energy (Fortune 150), where he and his team run more than 100 controlled experiments per year on customer-facing surfaces. He is the creator of the PRISM Method, a framework for high-velocity experimentation programs at large enterprises. He writes regularly about the statistical and operational details of A/B testing -- the parts most CRO content skips.
