NPS, what it measures, what it doesn't & how to use it
What NPS is, how it's calculated, why it's controversial, and how to use the score as the start of a conversation rather than the end.
Net Promoter Score (NPS) is a single-question loyalty metric introduced by Fred Reichheld and Bain & Company in 2003. It asks one thing: how likely are you to recommend this product to a friend or colleague, on a 0 to 10 scale.
NPS measures stated intent, not behavior. That distinction shapes everything you should and shouldn't do with the number.
How the score is calculated
The mechanics are intentionally simple, which is part of why the metric spread so fast.
The question is fixed: "How likely are you to recommend [product] to a friend or colleague?" Respondents answer on a 0 to 10 scale. Scores are then bucketed:
- Promoters (9 to 10): loyal enthusiasts who, in theory, fuel growth through word of mouth.
- Passives (7 to 8): satisfied but unenthusiastic, and vulnerable to competitive offers.
- Detractors (0 to 6): unhappy customers who can damage the brand through negative word of mouth.
The formula: NPS = % Promoters minus % Detractors. Passives count toward the response base but drop out of the calculation. The result lands somewhere between -100 and +100.
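To make the mechanics concrete, here's a minimal sketch in Python; the `nps` helper is illustrative, not any particular library's API:

```python
def nps(scores: list[int]) -> float:
    """Compute Net Promoter Score from raw 0-10 responses."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)   # 9 to 10
    detractors = sum(1 for s in scores if s <= 6)  # 0 to 6
    # Passives (7 to 8) stay in the denominator but cancel out of the numerator.
    return 100 * (promoters - detractors) / len(scores)

# Worked example: 100 responses split 50 promoters / 30 passives / 20 detractors
# gives 50% - 20% = 30.0.
```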
A few mechanics worth knowing. The 0 to 6 bucket is wide on purpose, but it means a 6 (often interpreted by respondents as "fine") is treated identically to a 0 ("furious"). Cultural rating norms vary: a 7 from a Northern European is often a 9 from an American. And at low response volumes, the confidence interval around the score is much wider than people typically report.
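That last point is easy to check yourself. A quick percentile bootstrap, reusing the `nps` helper from the sketch above (illustrative, standard library only):

```python
import random

def nps_confidence_interval(scores, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for an NPS estimate."""
    boots = sorted(
        nps(random.choices(scores, k=len(scores)))  # resample with replacement
        for _ in range(n_boot)
    )
    return boots[int(n_boot * alpha / 2)], boots[int(n_boot * (1 - alpha / 2))]

# At ~50 responses, a measured "30" often carries a 95% interval
# on the order of +/- 20 points.
```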
What NPS is good for
NPS earns its place in three specific ways.
It gives you a single trackable metric. One number, one chart, one line going up or down. For organizations that need a shared satisfaction signal across product, support, and marketing, that simplicity is genuinely valuable.
It's exec-friendly. Boards, investors, and leadership teams already understand NPS. It travels well in slide decks and quarterly reviews without requiring a stats primer.
It's comparable over time, against yourself. The most useful NPS comparison is your own product, this quarter versus last quarter. Did the score move after the redesign? After the pricing change? After you restructured onboarding? Internal trend lines are where NPS pays off.
It also works as a trigger. A drop in the score is a signal to investigate, to read open-text answers, to start a voice-of-customer project, to dig into churn analysis. As an alarm, it does the job.
What NPS isn't good for
The metric gets asked to do work it was never designed for. Three honest limitations.
Predicting churn on its own. Detractor scores correlate with churn risk in some segments, but the correlation is too weak to act on without context. Plenty of Detractors stay because switching costs are high. Plenty of Promoters leave when a competitor undercuts on price.
Cross-industry comparison. A SaaS NPS of 40 and a telecom NPS of 40 are not the same thing. Category expectations, regional rating norms, survey delivery method, and sample composition all skew the number. Published industry benchmarks are interesting context, not targets.
B2B with multiple stakeholders. When the economic buyer, the IT admin, and the daily power user are three different people with three different relationships to the product, a single account-level NPS is close to meaningless. You need role-segmented feedback, not one aggregated number per logo.
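A sketch of what role-segmented scoring looks like, reusing the `nps` helper from earlier (the role labels and input shape here are hypothetical):

```python
from collections import defaultdict

def nps_by_role(responses):
    """responses: iterable of (role, score) pairs, e.g. ("it_admin", 4).
    Returns one score per role instead of one blended number per account."""
    by_role = defaultdict(list)
    for role, score in responses:
        by_role[role].append(score)
    return {role: nps(scores) for role, scores in by_role.items()}

# An account that blends to +20 might decompose into buyers at +60
# and daily users at -30: two different problems hiding in one number.
```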
Why the open-text follow-up is the actual asset
The score gives you direction. The follow-up answer gives you action.
A 3 from a Detractor who writes "the CSV export times out on anything over 50,000 rows" is a product ticket. A 10 from a Promoter who writes "my whole team standardized on this after the Slack integration shipped" tells you which feature to keep investing in. The number alone tells you neither.
The standard follow-up is "What's the main reason for your score?" That's the floor, not the ceiling. Stronger follow-ups are bucket-aware, as in the routing sketch after this list:
- For Detractors: "What's the single change that would have made this a better experience?"
- For Passives: "What would have made this a 9 or 10 for you?"
- For Promoters: "Which specific part of the product would you tell a colleague about first?"
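A minimal routing sketch of those bucket-aware prompts, assuming scores arrive one at a time:

```python
def follow_up_question(score: int) -> str:
    """Pick the bucket-aware follow-up prompt for a 0-10 score."""
    if score >= 9:  # Promoter
        return ("Which specific part of the product would you "
                "tell a colleague about first?")
    if score >= 7:  # Passive
        return "What would have made this a 9 or 10 for you?"
    # Detractor (0 to 6)
    return ("What's the single change that would have made "
            "this a better experience?")
```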
Most NPS programs collect these comments and never read them at scale. They sit in a spreadsheet column. The score gets reported in the QBR; the why gets lost. That gap is where the metric quietly fails to deliver, and it's also the easiest thing to fix. If your open-text NPS responses are mostly empty or one-word, AI follow-up tools like Diaform can ask a tailored 90-second follow-up after each score, so the score becomes the start of a conversation instead of the end. The same approach extends naturally into churn surveys, where the depth of the answer matters more than the rating itself.
Transactional vs relational NPS
There are two common deployment patterns, and they answer different questions.
Relational NPS
Sent on a fixed cadence (typically quarterly or biannually) to your full active customer base. Measures overall sentiment about the relationship. Best for trend-tracking, board reporting, and year-over-year comparisons. Lower response rates, broader signal.
Transactional NPS
Triggered by a specific event: a support ticket close, an onboarding milestone hit, a renewal completed, a feature first used. Measures sentiment about that moment. Best for finding friction in specific journeys. Higher response rates, narrower signal, much more actionable for product and support teams.
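In code terms, the split is a calendar job versus an event hook. A sketch, where `send_survey`, the event names, and the trigger set are all hypothetical:

```python
TRANSACTIONAL_TRIGGERS = {
    "support_ticket_closed",
    "onboarding_milestone_hit",
    "renewal_completed",
    "feature_first_used",
}

def send_survey(user_id: str, kind: str, context: str | None = None) -> None:
    # Stub: wire this to your actual survey delivery channel.
    print(f"survey -> {user_id} ({kind}, context={context})")

def on_event(user_id: str, event: str) -> None:
    """Transactional: survey the moment, immediately after the moment."""
    if event in TRANSACTIONAL_TRIGGERS:
        send_survey(user_id, kind="transactional", context=event)

def quarterly_job(active_user_ids: list[str]) -> None:
    """Relational: fixed cadence, full active base, no event context."""
    for user_id in active_user_ids:
        send_survey(user_id, kind="relational")
```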
How to choose. If you only have bandwidth for one, pick the one tied to the decisions you actually make. Product and support teams get more from transactional NPS, because the feedback maps to a fixable moment. Leadership and investor reporting tend to want relational NPS, because it tracks the overall relationship over time. Mature programs run both, with transactional feeding the customer feedback loop and relational feeding the dashboard.
Common mistakes
A few patterns that quietly hollow out NPS programs.
Chasing the score, not the why. The moment NPS becomes a team KPI tied to comp, it gets gamed. Surveys get sent only to recently happy users. Detractors get quietly excluded from the sample. The number rises; nothing actually improves.
Comparing against industry benchmarks too literally. Methodology, sample, region, and survey channel vary enormously across published benchmarks. "Industry average is 32" tells you almost nothing about your own product's health. Compare yourself to yourself.
Surveying too often. NPS fatigue is real. If a customer sees the same survey every month, response quality collapses well before response rate does. Quarterly relational plus event-based transactional is plenty for most teams.
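A fatigue guard is cheap to enforce. A sketch with an in-memory store and a 90-day cooldown (both illustrative; a real program would persist this, per channel):

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(days=90)             # illustrative: one survey per quarter
_last_surveyed: dict[str, datetime] = {}  # user_id -> last send time

def may_survey(user_id: str, now: datetime | None = None) -> bool:
    """True only if the user hasn't been surveyed inside the cooldown window."""
    now = now or datetime.now()
    last = _last_surveyed.get(user_id)
    if last is not None and now - last < COOLDOWN:
        return False
    _last_surveyed[user_id] = now
    return True
```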
One closing thought
NPS is a thermometer, not a diagnosis. Treated as a single number to chase, it misleads. Treated as a routing mechanism that points you toward the right respondents to actually talk to, it earns its keep. The score tells you where to look. The follow-up tells you what to do.