Guide
Churn analysis: methods, metrics & how to find the real reason
How to actually understand why customers leave, from cohort analysis to AI-moderated cancel interviews. The metrics, the methods, and the why behind the number.
Churn analysis is the practice of measuring how many customers leave and understanding why they leave. The rate is the easy half. The reason is the half that decides whether you can do anything about it.
Most retention dashboards stop at the rate. A number that ticks up or down each month, with no explanation attached. That is a thermometer, not a diagnosis. You cannot prescribe treatment from a temperature reading.
The output of good churn analysis is not a lower number on a dashboard. It is a clear, evidence-backed sentence about why customers leave, specific enough that a PM, a marketer, or a CSM can act on it next week.
The two questions every team must answer
Every churn conversation collapses into two questions. Treat them as separate workstreams or you will keep confusing one for the other.
- How much:
The quantitative picture. Logo churn, MRR churn, gross and net retention, cohort decay curves, segmented by plan and persona. This is the dashboard layer.
- Why:
The qualitative picture. Pricing, fit, onboarding friction, missing features, competitor pull, life events. This is the conversation layer, and it is where most teams have nothing.
Metrics without reasons produce panic. Reasons without metrics produce guesswork. You need both, running in parallel, feeding each other.
Quantitative methods that actually segment the problem
Aggregate churn is a lie. The number averages across plans, cohorts, acquisition sources, and personas, and the average hides the segment that is actually bleeding. Start by breaking the number apart.
Cohort analysis
Group customers by signup month, track retention curves. Steep early drop = onboarding issue. Slow flat bleed = long-term value issue. If March churns twice as fast as January, find what changed.
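A minimal sketch of the computation, assuming you can export one row per customer per active month from your billing system (the column names and sample data here are illustrative):

```python
import pandas as pd

# Assumed input: one row per customer per month they were active.
# Column names (customer_id, signup_month, active_month) are illustrative.
activity = pd.DataFrame({
    "customer_id":  [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "signup_month": ["2024-01"] * 5 + ["2024-03"] * 4,
    "active_month": ["2024-01", "2024-02", "2024-03",
                     "2024-01", "2024-02",
                     "2024-03", "2024-04", "2024-05", "2024-06"],
})

# Months since signup for each activity row.
signup = pd.PeriodIndex(activity["signup_month"], freq="M")
active = pd.PeriodIndex(activity["active_month"], freq="M")
activity["age"] = [(a - s).n for a, s in zip(active, signup)]

# Retention curve: share of each signup cohort still active at each age.
cohort_size = activity.groupby("signup_month")["customer_id"].nunique()
alive = activity.groupby(["signup_month", "age"])["customer_id"].nunique()
retention = alive.div(cohort_size, level="signup_month").unstack(fill_value=0)
print(retention.round(2))  # rows: cohorts; columns: months since signup
```

One caveat on the sketch: a month outside your observation window also reads as 0 here, so trim the table to observed months before comparing cohorts.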
MRR vs logo churn
Logo churn counts customers leaving; MRR churn counts revenue leaving. High logo + low MRR = shedding small accounts. Low logo + high MRR = whales walking out. Track both.
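The arithmetic is simple enough that a toy month makes the divergence concrete (all figures hypothetical):

```python
# Hypothetical month: 200 customers and $100k MRR at the start.
# 20 small accounts at $50/mo cancel; 1 enterprise account at $5k/mo cancels.
starting_customers, starting_mrr = 200, 100_000

small      = {"count": 20, "mrr_each": 50}
enterprise = {"count": 1,  "mrr_each": 5_000}

logo_churn = (small["count"] + enterprise["count"]) / starting_customers
mrr_churn = (small["count"] * small["mrr_each"]
             + enterprise["count"] * enterprise["mrr_each"]) / starting_mrr

print(f"logo churn: {logo_churn:.1%}")  # 10.5% -- looks alarming
print(f"MRR churn:  {mrr_churn:.1%}")   # 6.0%  -- mostly small accounts leaving
```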
Gross vs net retention
Gross retention = what you keep before expansion. Net adds upsell. A 110% net can hide 70% gross when a few accounts expand fast. Net flatters; gross tells you if the product is sticky.
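Here is that exact scenario worked through, with hypothetical figures chosen to match the claim above:

```python
# Hypothetical quarter, matching the "110% net can hide 70% gross" claim.
starting_mrr = 100_000
churned_mrr  = 25_000   # cancelled accounts
contraction  = 5_000    # downgrades
expansion    = 40_000   # upsells concentrated in a few accounts

gross_retention = (starting_mrr - churned_mrr - contraction) / starting_mrr
net_retention   = (starting_mrr - churned_mrr - contraction + expansion) / starting_mrr

print(f"gross retention: {gross_retention:.0%}")  # 70%  -- the product is leaking
print(f"net retention:   {net_retention:.0%}")    # 110% -- expansion hides it
```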
Segmentation
Slice churn by plan, source, company size, feature usage. The segments at 2-3x the average are where the story lives. A free-tier 80% drop and an enterprise 8% drop are different problems.
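A sketch of the slicing, again assuming a simple per-customer export (column names and data are made up):

```python
import pandas as pd

# Assumed export: one row per customer with plan, acquisition source, churn flag.
customers = pd.DataFrame({
    "plan":    ["free"] * 5 + ["pro"] * 4 + ["enterprise"] * 3,
    "source":  ["ads", "ads", "ads", "organic", "organic",
                "organic", "ads", "organic", "ads",
                "sales", "sales", "sales"],
    "churned": [1, 1, 1, 0, 1,  0, 1, 0, 0,  0, 0, 0],
})

overall = customers["churned"].mean()
by_segment = customers.groupby(["plan", "source"])["churned"].agg(["mean", "count"])

# Segments churning at 2x+ the average, ignoring tiny samples.
hot_spots = by_segment[(by_segment["mean"] >= 2 * overall) & (by_segment["count"] >= 2)]
print(f"overall churn: {overall:.0%}")
print(hot_spots)
```

The count filter is doing real work: a scary rate on a handful of accounts is noise, not a segment.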
This is the work that tells you who is leaving. It cannot tell you why. For that you have to talk to people.
Why static exit surveys give you garbage data
Most teams reach for a static exit survey, get useless data back, and stop. The pattern is predictable.
A churning user gets a dropdown with five options: too expensive, missing features, switched to a competitor, no longer needed, other. They pick one in two seconds and leave. You count the picks and announce that 38% of churn is "too expensive."
That number is worse than no number. "Too expensive" almost never means the price is too high in absolute terms. It means the user did not get enough value to justify the price they already agreed to. That is a product problem, an onboarding problem, or an ICP problem, dressed up as a pricing problem. If you respond by cutting the price, you make the unit economics worse without fixing the underlying issue.
"Missing features" rarely names the actual feature. "Switched to a competitor" rarely names the competitor or the trigger. "Other" is where the real signal hides, in a free-text box that nobody reads.
Static exit surveys give you categories. You need causes. For more on why one-shot surveys underperform, see the exit survey guide and the broader voice of customer framework.
Methods that actually surface the why
There are four methods that produce explainable, segmentable answers about why customers leave. Run them together.
Cancel interviews at the moment of cancellation
Talk to churning customers while the reason is fresh and specific. A 90-second conversation at the cancel button beats a 30-minute scheduled call a week later, because by then the user has rationalized the decision into a clean story. Five real conversations beat five hundred multiple-choice responses.
NPS-based at-risk segmentation
Detractors churn at three to five times the rate of promoters. Tag low-NPS responses as at-risk, route them to a follow-up conversation, and you get pre-churn signal weeks or months before the cancellation, while you can still act.
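The tagging rule is trivial to automate. A sketch, where the response format and the routing step are stand-ins for whatever your survey tool actually provides:

```python
# Standard NPS buckets: 0-6 detractor, 7-8 passive, 9-10 promoter.
def classify(score: int) -> str:
    if score <= 6:
        return "detractor"   # at-risk: churns at 3-5x the promoter rate
    if score <= 8:
        return "passive"
    return "promoter"

# Hypothetical (account, score) pairs from a survey export.
responses = [("acme", 4), ("globex", 9), ("initech", 6)]

at_risk = [account for account, score in responses if classify(score) == "detractor"]
for account in at_risk:
    print(f"route {account} to a follow-up conversation")  # pre-churn signal
```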
Behavioral signals from product telemetry
Login frequency dropping, feature adoption declining, fewer seats active, support tickets spiking. Build a churn-risk score from these signals and trigger CS outreach before the renewal date. By the time the cancel email arrives, the decision was made weeks ago.
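A toy version of such a score, assuming each signal can be pulled per account and normalized to a 0-1 range; the signal names, weights, and threshold are illustrative and should be calibrated against accounts that actually churned:

```python
# Illustrative weights -- calibrate against historical churn, not by feel.
WEIGHTS = {
    "login_drop":       0.35,  # week-over-week decline in logins, 0..1
    "adoption_decline": 0.25,  # share of key features going unused, 0..1
    "seat_inactivity":  0.20,  # share of paid seats with no activity, 0..1
    "ticket_spike":     0.20,  # support volume vs. account baseline, 0..1
}

def churn_risk(signals: dict) -> float:
    """Weighted sum of normalized risk signals."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

account = {"login_drop": 0.8, "adoption_decline": 0.6,
           "seat_inactivity": 0.5, "ticket_spike": 0.1}
score = churn_risk(account)
if score > 0.5:  # threshold picked by backtesting, not a round number
    print(f"risk {score:.2f}: trigger CS outreach before the renewal date")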
CS ticket analysis
Your support inbox is an unread churn report. Tag tickets by theme: bugs, missing features, pricing complaints, integration gaps. Cross-reference the themes against accounts that later churned. The themes that correlate are your root causes.
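A sketch of the cross-reference, assuming tickets are already tagged by theme and you know which accounts later churned (account names and themes are made up):

```python
from collections import Counter

# Assumed inputs: (account, theme) pairs plus the set of churned accounts.
tickets = [
    ("acme", "integration_gap"), ("acme", "bug"),
    ("globex", "pricing"), ("initech", "integration_gap"),
    ("umbrella", "bug"), ("initech", "missing_feature"),
]
churned = {"acme", "initech"}

themes_all = Counter(theme for _, theme in tickets)
themes_churned = Counter(theme for account, theme in tickets if account in churned)

# Share of each theme's tickets that came from accounts that later churned.
for theme, total in themes_all.most_common():
    share = themes_churned[theme] / total
    print(f"{theme:16s} {themes_churned[theme]}/{total} from churned accounts ({share:.0%})")
```

Themes with a high churned share are your candidate root causes; themes that are loud but uncorrelated are just noise.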
These methods produce qualitative data. The mistake is to treat that data as anecdotal and unscalable. Tag the conversations by theme, count the themes, and you have quantitative output from qualitative input. That is the loop. See customer feedback loop for how to wire it into product decisions.
Saveable vs unsaveable churn
Not all churn is bad churn. Treating it as one bucket wastes save-team budget and burns out CSMs chasing accounts that were never going to stay.
- Saveable churn:
Friction, missing features, pricing misalignment, poor onboarding, competitor pull from a fixable gap. These accounts can be saved, but usually by fixing the underlying product issue, not by handing out discounts.
- Unsaveable churn:
Company shutdowns, acquisitions, role changes, and most importantly, a fundamental lack of product fit. The user was never going to get value from your product. A 30% discount delays the cancellation by three months and produces a bitter customer at the end of it.
The framing shift that matters most: fit-related churn is a marketing and ICP problem, not a CS problem. If you keep losing the same customer profile for the same reason, the answer is to stop selling to that profile. Tightening the ICP at the top of the funnel does more for retention than any save offer at the bottom.
A related trap. Save offers are tempting because they show up as "saved revenue" in next month's report. But if the product is the reason they were leaving, the save is temporary. Spend the save-team budget on the product fix instead. The math compounds.
If you want to replace your exit-survey checkboxes with a real cancel interview that surfaces the saveable accounts, AI-moderated tools like Diaform can run a 90-second conversation at the cancel button. The output is structured summaries with sentiment, themes, and quotes, not a category count. See the cancellation survey flow for the setup.
One practical thought to close
Pick one segment that is churning faster than the rest. Talk to ten of them this month, in real conversations, not a dropdown. Whatever you learn will be worth more than another quarter of dashboard staring.