User interviews: definition, methods & best practices
What user interviews are, when to run them, and the methods top product teams use to learn fast, without burning out the team or the customer.
A user interview is a structured one-on-one conversation with someone who uses your product, used to use it, or fits the profile of someone who could. The point is to understand how that person thinks, works, and makes decisions in the context where your product lives.
What user interviews are, and what they aren't
User interviews sit inside the broader practice of user research. They are qualitative. You are after depth, not statistical confidence. A typical session runs 30 to 60 minutes, follows a loose script, and produces a recording, a transcript, and a handful of notes you can act on.
It helps to be clear about what an interview is not.
It is not a sales call. The moment you start describing your roadmap, the user shifts into customer mode and starts being polite. You stop learning.
It is not a focus group. Group dynamics produce consensus, not insight. One loud participant pulls the rest of the room toward a single answer.
It is not a survey read aloud. If your script is twenty closed questions in fixed order, send a survey. Interviews earn their cost by following up on what the user actually says.
It is not a usability test, though the two are often confused. Erika Hall makes this distinction well in Just Enough Research: an interview asks about the user's life and work. A usability test asks the user to perform a task while you watch. Different goals, different scripts, different outputs.
When to run interviews instead of something else
Interviews are the right tool when you need to understand reasoning, context, or behavior that happens off-screen. They are the wrong tool when you need a number.
Reach for interviews when you are trying to:
- Understand a problem space before you have a solution. This is customer discovery work.
- Map the steps and emotions around a recent purchase or switch, which is the core move in jobs-to-be-done research.
- Figure out why a metric moved when analytics can show the what but not the why.
- Pressure-test an assumption your team keeps repeating without evidence.
- Talk to users who churned, because they are invisible in your product data.
Reach for something else when you need to know how many people feel a certain way (survey), whether a layout works (usability test), or whether a price point converts (live pricing test). Stated intent in an interview is a poor predictor of actual purchase behavior, so do not ask "would you pay for this" and treat the answer as data.
How to structure a good interview
A good interview has three parts: a warm-up, a core, and a wind-down. The structure matters because trust takes about five minutes to build, and the best material almost always comes after that warm-up window closes.
Open with context, not a pitch
Spend two minutes explaining what you are doing and what you are not. Make it explicit: you did not build the thing, there are no wrong answers, you want their honest experience. Ask permission to record.
Start broad, then narrow
Begin with the user's role and recent work. "Walk me through your last week" gives you a map of where your product sits in their day. From there, narrow toward the specific behavior or decision you care about.
Anchor questions in real events
Ask about the last time something happened, not how it usually goes. "Tell me about the last time you onboarded a new client" beats "How do you usually onboard clients" by a wide margin. Memory of specific events is more accurate than self-reported averages.
Ask, then stop talking
Count to three after each question before saying anything else. Most people fill the silence with a more honest answer than the first one. Interviewers who jump in get the polite version.
Follow the why
"Tell me more about that." "What were you trying to do?" "What happened next?" The follow-up is where the insight lives. Never accept the first abstract answer. Pull it back to a concrete recent example.
Close with a thank you and an opening
Ask if they would be open to a follow-up. The relationship is worth more than the transcript, especially if you are talking to a niche segment.
A useful rule of thumb from Steve Krug's usability work, which transfers cleanly here: if you are talking more than 20 percent of the session, you ran a presentation, not an interview.
Common mistakes that ruin the data
Most bad interviews fail in the same handful of ways. They are easy to spot in a recording, harder to catch in the moment.
Leading questions. "Don't you think it would be useful if we added X?" is a pitch with a question mark. Replace with "How do you currently handle that situation?" and let the user describe the gap, or fail to.
Hypothetical questions. "Would you use a feature that does Y?" gets you fiction. People are bad at predicting their own behavior. Ask about the past instead. What they did last month is real. What they say they would do next month is a guess.
Talking to the wrong people. Recruitment is the single biggest determinant of insight quality. Ten interviews with the right segment beat fifty with anyone who showed up. Define the segment in writing (role, recent behavior, lifecycle stage) before you start scheduling.
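If it helps to make "define the segment in writing" concrete, a screener can be expressed as a simple filter. The sketch below is illustrative only; the fields, values, and 90-day cutoff are assumptions for the example, and a spreadsheet filter does the same job.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Candidate:
    role: str              # self-reported job function, e.g. "admin"
    lifecycle_stage: str   # e.g. "trial", "paying", "churned"
    last_active: date      # most recent session in your product

def fits_segment(c: Candidate) -> bool:
    """Keep only churned admins who were active in the last 90 days."""
    cutoff = date.today() - timedelta(days=90)
    return (
        c.role == "admin"
        and c.lifecycle_stage == "churned"
        and c.last_active >= cutoff
    )

candidates = [
    Candidate("admin", "churned", date.today() - timedelta(days=20)),
    Candidate("viewer", "paying", date.today() - timedelta(days=3)),
]
# Shortlist before any scheduling email goes out.
shortlist = [c for c in candidates if fits_segment(c)]
```

The point is not the code. It is that every criterion is written down before recruiting starts, so "anyone who showed up" cannot creep into the cohort.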
Confirming what you already believe. If your script is built to validate a feature you already plan to ship, you will find validation. Write the script before the demo, not after, and include questions whose answers could change your mind.
Skipping the synthesis. A folder of recordings is not research. The work of pulling out themes, quotes, and decisions is where interviews actually produce value. Block time for it before you book the first session.
Treat the recording as raw material, not the finished product. If you do not have time to synthesize, you do not have time to interview.
How many interviews you actually need
There is no universal number, but there is a useful range. Jakob Nielsen's classic finding for usability testing is that around five users per round surface the majority of issues, because the same friction points repeat quickly. For generative discovery in a new space, plan for closer to 10 to 15 before themes stabilize.
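The "around five" figure is not arbitrary. It falls out of the model Nielsen published with Tom Landauer, in which each tested user uncovers a roughly fixed share of the remaining problems; in their data that share averaged about 31 percent:

```latex
% Expected share of usability problems found after n users,
% with lambda = per-user discovery rate (about 0.31 in Nielsen and Landauer's data)
\mathrm{found}(n) = 1 - (1 - \lambda)^n,
\qquad \mathrm{found}(5) = 1 - (1 - 0.31)^5 \approx 0.84
```

So five users surface about 84 percent of the problems, and each user after that mostly re-finds what the first five already hit. Treat the curve as a heuristic for usability rounds, not a law for discovery interviews.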
The signal that you are done is repetition. When the third person says what the first two said in their own words, you are at saturation for that segment. If you are still hearing brand new things at interview number 12, you are probably interviewing more than one segment. Split the cohort and start counting again.
The bigger constraint is usually not how many you should run, but how many you can. Recruiting, scheduling, sitting through the call, transcribing, and synthesizing adds up to several hours per interview. That cost is why most teams interview far less often than they should, and why the work tends to cluster around launches instead of running continuously.
If you want dozens of interviews without the scheduling overhead, AI-moderated tools like Diaform can run them asynchronously, ask contextual follow-ups, and return structured summaries while you focus on synthesis. The trade-off is real (no human warmth, no off-script tangents), but for high-volume rote interviewing (onboarding feedback, churn exits, post-purchase follow-ups) the volume usually wins.
The teams that learn fastest are not the ones with the best scripts. They are the ones who talk to users this week, again next week, and who have built a habit of acting on what they hear.