User research, methods, frameworks & how to start

A practical guide to user research methods, when to use each, and how to build a continuous-discovery practice that actually drives decisions.

User research is the structured study of the people who use, or could use, a product. Its job is to replace internal opinion with external evidence before money is spent building the wrong thing.

What user research covers, and what it doesn't

User research answers questions about behaviour, motivation, and friction. Why does a trial user churn on day two? What language do buyers use when they describe the problem? Where does the checkout flow break for first-time users? These questions sit upstream of design and engineering decisions.

It is not market research. Market sizing, competitor benchmarks, and pricing elasticity belong to a different discipline with different methods. It is also not product analytics. Analytics describes what happened inside the product. Research explains why it happened, and what would happen with a different design.

The line is worth holding because mixing them produces weak work. A survey that asks users to rank features is closer to market research than user research, and it tends to predict the future poorly. Erika Hall makes this point well in Just Enough Research: the value of the discipline collapses when teams skip the rigour and treat research as a vote.

The toolkit

There is no master method. Each tool answers a narrow class of question, and competent teams pick based on the question, not the habit.

🎙️

User interviews

One-to-one conversations about real, recent behaviour. The backbone of qualitative work. Strong on motivation, context, and language. See user interviews.

📊

Surveys

Structured questions sent to many people. Strong on counts and comparisons across segments. Weak when the team does not yet know what to ask.

🧪

Usability testing

Watching people attempt real tasks in a product or prototype. Reveals friction, broken mental models, and dead ends. Detail in usability testing.

📓

Diary studies

Participants log behaviour over days or weeks. Useful for habits, longitudinal change, and contexts a researcher cannot observe directly.

💡

Concept testing

Showing an early idea, message, or mock to gauge reaction and comprehension before committing to build. See concept testing.

📈

Analytics review

Quantitative behavioural data from inside the product. Pairs with interviews: analytics shows the cliff, interviews explain why people jump.

Two specific frames are worth naming. Jobs to Be Done (JTBD) interviews focus on the moment a customer decided to switch, a frame that is unusually good at exposing the real trigger behind a purchase. Indi Young's listening sessions, by contrast, ignore the product entirely and map how a person thinks about a problem space. Both are interviews, but the questions they answer are different.

Generative versus evaluative

Every study sits somewhere between two intents.

Generative research explores. The team does not yet know what the problem is, who feels it, or how it shows up. Open interviews, diary studies, contextual inquiry, and JTBD switch interviews fit here.

Evaluative research judges. A design, prototype, message, or feature exists, and the question is whether it works. Usability tests, concept tests, A/B tests, and structured surveys fit here.

The most common failure mode is using an evaluative method to answer a generative question. A usability test on an existing flow tells the team where the buttons confuse people. It does not tell the team whether the underlying job is worth doing at all. Match the method to the question.

Qualitative versus quantitative

Qualitative methods produce stories, language, and reasons. Quantitative methods produce counts, rates, and statistical confidence. Neither is superior; they answer different questions, and serious teams run them in sequence.

A working rule: qualitative tells you what to measure, quantitative tells you whether it moved. Five interviews will surface a hypothesis about why trial users churn. A survey or analytics cohort will tell you how many users that hypothesis applies to. Reverse the order and you end up with statistically significant answers to questions nobody should have asked.
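The sequencing above can be sketched as a toy cohort check: interviews produce the hypothesis ("trial users churn on day two"), and a quick pass over behavioural data counts how many users it applies to. The event records and field names below are invented for illustration, assuming activity data exported from the team's own analytics tool.

```python
# Hypothetical activity export: which days each trial user was active.
events = [
    {"user": "a", "days_active": [0, 1]},        # gone by day 2
    {"user": "b", "days_active": [0, 1, 2, 3]},  # retained
    {"user": "c", "days_active": [0]},           # never came back
    {"user": "d", "days_active": [0, 2]},        # skipped day 1, back on day 2
]

# Quantify the qualitative hypothesis: what share returned on day 2?
cohort = len(events)
returned_day2 = sum(1 for e in events if 2 in e["days_active"])
rate = returned_day2 / cohort
print(f"day-2 return rate: {returned_day2}/{cohort} = {rate:.0%}")
# → day-2 return rate: 2/4 = 50%
```

The point of the order is visible even in a sketch this small: without the interviews, nothing says that day 2 is the right day to count.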

Full breakdown of the trade-offs in qualitative vs quantitative research.

How to start a practice in a small team

Most small teams do not need a research function. They need a research habit. Teresa Torres calls the version of this habit that works inside product teams continuous discovery: the product trio talks to customers every week, in small batches, while building. The point is not interview volume. It is that decisions stop being one quarterly bet and become a steady stream of small evidence-backed choices.

Pick one question

Write down the single thing the team most needs to know this month. "Why do trial users not return on day two?" beats "learn about our users". Make it specific enough that two people would design similar studies for it.

Choose the method that fits

A "why" question wants interviews. A "which works better" question wants a test. A "how often" question wants analytics or a survey. Resist using whatever tool is closest at hand.

Recruit a steady supply

The bottleneck in research is almost always recruiting. Pick one source: existing users, waitlist, in-app intercept, or a paid panel. Five to eight participants per segment is enough to see patterns in qualitative work.
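Drawing a small, even batch per segment from a single source can be sketched in a few lines; the user records, segment names, and batch size below are made up, assuming an exported list of existing users.

```python
import random

# Hypothetical recruiting pool: 20 users in each of three segments.
users = [
    {"email": f"user{i}@example.com", "segment": seg}
    for seg in ("trial", "paid", "churned")
    for i in range(20)
]

random.seed(42)  # reproducible draw for the example
per_segment = 6  # inside the 5-8 range that surfaces patterns

# Draw an even sample from each segment so no group dominates.
invites = {}
for seg in {u["segment"] for u in users}:
    pool = [u for u in users if u["segment"] == seg]
    invites[seg] = random.sample(pool, min(per_segment, len(pool)))

for seg, batch in invites.items():
    print(seg, [u["email"] for u in batch])
```

Sampling rather than taking the first respondents matters: volunteers who answer fastest tend to be the happiest or angriest users, not the typical ones.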

Protect a calendar slot

Block a recurring window for sessions and a second one for synthesis. Without protected time the practice dies inside a month.

Share findings where decisions happen

A one-page summary with three insights and three recommendations, posted in the channel where the team plans, beats a forty-slide deck nobody opens.

The hardest part is not learning the methods. It is keeping the rhythm. A team that does five mediocre interviews every week for a year will outperform a team that runs one polished study per quarter, because the first team's mental model of its users stays current.

AI-moderated interview tools like Diaform can scale qualitative research without booking calls, which removes the most common excuse for letting the rhythm slip. The deeper mechanics are covered in AI-moderated interviews.

The first study a team runs is rarely the best one. The value comes from running the second, the third, and the tenth. Treat the first as practice, not as proof.

A useful closing thought: research is not a phase that ends when the product ships. The users keep changing, the market keeps moving, and the only teams that stay calibrated are the ones that keep listening on a schedule.

Ready to upgrade your feedback loop?

Stop guessing why users leave. Start an automated interviewer in seconds and get the deep insights of a Zoom call at the scale of a survey.

14-day free trial · No demo required