Rigor is the foundation of valuable UX research. It’s what allows organizations to make good decisions that help them achieve their outcomes. In practice, though, limited resources (time, money, politics, etc.) mean we can’t always do maximally rigorous research. The question then becomes: how much rigor is enough rigor?
I’m a strong advocate for the importance of rigor, but I also don’t think every project requires the maximum possible amount of rigor. There is one line researchers should not cross when developing their research: Minimum Viable Rigor (MVR).
Staying above the line of MVR means you are confident enough that the insights will help stakeholders design their actions to achieve the intended consequences, given the risk tolerance of the situation. When your methods don’t meet MVR, you will still produce an insight. However, that insight won’t align stakeholders’ decisions closely enough with reality. This leads to bad outcomes, and outcomes are the only thing stakeholders really care about.
MVR is a nuanced concept. Unlike a discipline such as engineering, where bad code gives you an error, you can get something resembling a useful insight with any method, whether or not it’s accurate. This is because the truth UX research seeks is fragile.
The fragility of truth
Rob Fitzpatrick said, “Trying to learn from customer conversations is like excavating a delicate archaeological site. The truth is down there somewhere, but it’s fragile. While each blow with your shovel gets you closer to the truth, you’re liable to smash it into a million little pieces if you use too blunt an instrument.” This goes far beyond customer conversations (what we might call interviews in UX research) – it applies to all science, but perhaps most acutely to social science methods.

My academic training is in psychology. Most of the study of psychology research methods is finding the precise instrument that lets you excavate the truth vase without smashing it. This applies to quantitative and qualitative research alike. We see this in one of the most common refrains in UX research, from Erika Hall in Just Enough Research: “The first rule of user research: never ask anyone what they want”. Asking that question in an interview or survey is the equivalent of using a backhoe to excavate a teacup from an archaeological site. You’re going to dig up something, but it won’t be useful.
Rigor creates a process for excavating the truth without smashing it. In my previous post on rigor, I formally defined rigor for UX research as when environmental variables are effectively uncovered to help shape decisions that lead to intended consequences.
When describing rigor, I have also used the metaphor of clarity. Rigor is what we employ to make the lens between our limited viewpoint and reality less blurry. A good method removes the debris from the lens, leaving a clear view for us to make decisions. A bad method removes the debris at the expense of warping and misshaping the lens. What is left is a view of reality, but it’s not useful. You may want to choose a path because it looks short, but only because you’ve made the lens artificially elongate the alternative paths.
As applied researchers, our goal when assessing rigor is to make judgement calls about how well we can see through the lens, or if we need to discard it entirely as it will lead us astray[1]. This is the question we must answer when we choose which corners to cut in UX research. What is the MVR? Has my work or my plan crossed below the line? This “allegiance to a set of standards”[2] of enough rigor is what makes a professional researcher the right person for the job, regardless of title.
Elements of MVR
It’s important to first clarify that the object or focus of MVR is an insight[3]. It’s not the statistical model itself, or the qualitative discussion guide, though these elements certainly play a crucial role in the final judgement of MVR. Ultimately, what we judge is the insight obtained from the method(s). MVR can be planned for, but the real test comes after the research is executed, since things don’t always go according to plan.
In Minimum Viable Rigor, there are two elements to break down: “Minimum Viable” and “Rigor”. I’ve already defined rigor at length, and other sources discuss threats to validity in quantitative and qualitative research in great detail. Threats to validity deteriorate rigor in research projects, and they are often introduced by real constraints such as time, resources, and expertise.
Researchers interrogate their own work and work with teammates to evaluate these validity threats. Because of the applied context, some tradeoffs (often many) will be made in UX research plans to deliver insights within project constraints. The goal of UX research is not perfect truth; it’s to align closely enough with the truth that the team’s actions achieve intended outcomes at an acceptable rate, given the risk tolerance of the decision to be made. This is where we try to understand what is minimally viable in our rigor.
The viability of a research insight does have a rock-bottom floor, where the foundations are so poor the insight cannot be used at all. Most insights are not at this rock bottom (yes, even NPS has some value). The way to begin drawing the line of MVR is by comparing the evaluation of an insight’s rigor to the rigor necessary for the decision-making context.

There is no numerical formula for this — it’s a judgement made by assessing the rigor and the organizational needs. Still, the qualitative formula is: MVR = Rigor of Insight – Decision Risk. The rigor needs to be higher as the riskiness of the decision grows to stay above the MVR line.
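To make the comparison concrete, here is a toy sketch in Python. The numeric “scores” are pure invention for illustration; in practice this is a qualitative judgement, not a computation:

```python
# Toy illustration only: rigor and risk are judgements, not numbers you can
# actually compute. The point is the comparison, not the arithmetic.
def meets_mvr(insight_rigor: float, decision_risk: float) -> bool:
    """An insight is viable when its rigor covers the decision's risk."""
    return insight_rigor - decision_risk >= 0

# The same lightweight study can clear MVR for a low-stakes decision...
print(meets_mvr(insight_rigor=0.4, decision_risk=0.2))  # True
# ...and fall below the line when the decision carries more risk.
print(meets_mvr(insight_rigor=0.4, decision_risk=0.7))  # False
```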
The best way to precisely detect MVR is to deeply understand both research rigor and the decision-making context. Having a deep understanding of rigor is not a call to maximally apply rigor in every situation; knowing rigor deeply gives you the ability to understand which corners to cut, and when. Having a deep understanding of the decision-making context means knowing the problem area’s scope and its potential influence on the organization’s goals. Between rigor and decision risk, researchers explore the needed and available resources (time, money, personnel, technology, etc.) to reconfigure the weights until an optimal balance is found.
A misunderstanding I see in online discourse is that knowing how to achieve maximal rigor means a researcher will always attempt to apply it. This is only true of UX researchers whose skills are not well-rounded: those lacking the emphasis on evaluating decision risk in MVR. Avoiding researchers with a strong sense of rigor as a rule is misguided. Rigor is the essential element in research; it simply must be balanced with a good sense of decision-making contexts.
Levels of decision risk
While mostly gray, there are some areas where MVR bottoms out regardless of decision-making context: the rigor is so lightweight it can’t balance out any type of decision. In essence, you’re making a decision on luck, but it’s worse than knowingly choosing luck. The team is unfairly convinced by the insight, thinking it holds more weight than luck. New insights that are valid then have to work hard to find their way into the decision, if at all.
Certain decision-making contexts can tolerate more risk than others. This is industry-dependent, and sometimes even codified when risks are high enough (like usability in medical devices, though this veers slightly more into human factors than UX research). In UX research, it’s typically dependent on each project, given the product lifecycle stage, the impact of the decision on users/revenue/compliance, etc. Researchers must weigh whether the corners that have (inevitably) been cut are acceptable for the risk tolerance in the team’s decision.
It’s impossible to succinctly lay out, here or anywhere, all of the elements that could give a decision tree for MVR[4]. To give a feel for what MVR is in practice, I will go over some hypothetical examples where insights do or don’t fall below the MVR line.
Examples of MVR
These examples dig into just a few ways insights can fall above or below MVR, even within the same study. They aren’t comprehensive, but they peel back the curtain on my thought process.
Planning a study for a price comparison page
Let’s say a stakeholder of an e-commerce site for an app + hardware product wanted to get a sense of how easy their price comparison page was to understand. They wanted to benchmark a survey question that explicitly asked, “How easy or difficult is the information to understand on this page?”. They were insistent on this phrasing, falling into the trap of turning your research question directly into your survey/interview question.
This is a red flag for rigor. It asks users to reflect on their own cognitive processes in real time, not their behavior or perceptions. Humans are very bad at this (think: can a lens take a picture of itself?).
Participants may give an honest answer, but it won’t be a reliable answer. How can a person know if the information is actually understood or not? If they didn’t understand it, would they differentiate that from feeling clear about something that was in fact misunderstood? It’s unlikely, and impossible to be sure of. This situation hits the very bottom of MVR – we simply can’t say the insight we would get is related to the core research question.
I would explain to stakeholders that we would get data, but that it wouldn’t be a reliable snapshot of how effective the page is at doing its job. We could then pivot to behavioral benchmark tasks, asking users to provide the objectively correct answers to concrete comparisons within the table. It may still take a while to convince the stakeholder, with them pushing back about wanting to know about the whole page, not just the given comparisons in the test scenarios. This is where patience and diplomacy come in: reiterating how being more explicit about a few specific things will give us better information than asking generally about the research question itself.
Legally, I can’t say if this actually happened to me in a previous role 🙂.
An international survey
Let’s say a team ran a survey with first-party users recruited in the US, Germany, and Mexico to learn about grocery shopping behaviors. You were brought in to analyze the data.
A Likert question about product satisfaction shows that users in Mexico are much more satisfied than users in other countries. If the insight is “users in Mexico are much more satisfied with online grocery delivery compared to other countries”, it would not meet my threshold for MVR. Users in Mexico (among other countries) are more likely to demonstrate acquiescence bias in Likert question responses than populations in the US or Germany. The boost we’re seeing in Mexico is almost certainly a result of this bias, not a meaningful difference in underlying satisfaction.
To add more nuance, other questions in the survey were emotionally neutral behavioral questions (Did you order groceries online in the past 7 days?) and ranking questions. Even in the same survey sample, I would trust these for comparisons across countries, as they’re more robust to acquiescence bias. So in the end, part of the survey’s results would meet MVR and part would not.
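A small simulation can illustrate why. This sketch is mine, with invented numbers; it assumes acquiescence bias acts roughly as an upward shift applied to every rated item in one country:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical: identical latent satisfaction in two countries...
latent_a = rng.normal(3.0, 1.0, n)
latent_b = rng.normal(3.0, 1.0, n)

# ...but country B answers Likert items with an acquiescence shift.
likert_a = np.clip(np.round(latent_a), 1, 5)
likert_b = np.clip(np.round(latent_b + 0.6), 1, 5)
print(likert_a.mean(), likert_b.mean())  # B looks "more satisfied" despite no real difference

# Ranking questions: the same agree-more shift on every item leaves
# each respondent's rank order untouched.
prefs = rng.normal([3.5, 3.0, 2.5], 0.8, (n, 3))   # true preferences for 3 features
ranks_unbiased = np.argsort(-prefs, axis=1)
ranks_biased = np.argsort(-(prefs + 0.6), axis=1)
print((ranks_unbiased == ranks_biased).all())      # True
```

The mean comparison is distorted by the shift, while the within-person rankings are not, which is the intuition behind trusting the ranking and behavioral questions across countries.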
Critical revenue flows
The examples above focus mostly on the evaluation of rigor, but how does business risk weigh into MVR?[5]
Let’s imagine you’re working to redesign the core advertisement creation flow for a company that makes $10+ billion per year from ads. The fix is quite small: the team wants to adjust the copy and location of a sometimes-used setting related to an ad’s budget. The need is well defined by prior generative and evaluative qualitative research. The impact potential, however, is quite big, even just running an A/B test for a few days. A small percentage drop in ad creation rate for the experimental group could mean the company loses millions of dollars.
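Some rough back-of-the-envelope arithmetic shows why. Every parameter below is hypothetical except the revenue figure from above:

```python
annual_ads_revenue = 10e9         # $10B+/year from ads (given above)
daily_revenue = annual_ads_revenue / 365

exposure = 0.50                   # hypothetical: half of traffic in the experiment arm
relative_drop = 0.02              # hypothetical: a 2% drop in ad creation (and thus revenue)
days = 5                          # hypothetical: a short A/B test

expected_loss = daily_revenue * exposure * relative_drop * days
print(f"~${expected_loss:,.0f}")  # roughly $1.4M for a "small" regression
```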
In a startup, changing this flow might look like a quick five-person usability test from end to end. However, that would not fly in a mature company generating this much revenue from the user flow. There is too much on the line.
If I were in charge of this evaluative research, I’d use a quantitative usability benchmark, planning for the standard 95% confidence the company relies on for most decision-making. That plan alone, of course, is not sufficient. I’d need to choose effective usability tasks, craft realistic scenarios, use a well-suited and validated scale, and analyze the metrics correctly.
Again, resources play into it even at this scale: I’d love 99% confidence intervals, but going from 95% to 99% confidence intervals for completion rate (binary) data, holding the same margin of error, would mean going from about 400 participants to roughly 700. Finding even 400 advertisers to participate in a task-based benchmark may already heavily tax the research ops team of a Fortune 100 company.
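A minimal sketch of that arithmetic, using the normal approximation for a confidence interval around a proportion (assuming a ±5% margin of error and the worst-case completion rate of 0.5):

```python
from math import ceil
from statistics import NormalDist

def n_for_proportion_ci(confidence: float, margin: float, p: float = 0.5) -> int:
    """Participants needed for a +/- `margin` confidence interval around a
    completion rate. Normal approximation; p = 0.5 is the most conservative."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil(z**2 * p * (1 - p) / margin**2)

print(n_for_proportion_ci(0.95, 0.05))  # 385 participants
print(n_for_proportion_ci(0.99, 0.05))  # 664 participants
```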
Running this benchmark would show us quantitatively, before we test it live, either that the flow would not be heavily impacted or where and how users were getting stuck. It’s a heavy method, but fit for purpose given the impact on the business and its users.
The answer to how you conduct any research, even a minor UI change, depends entirely on your decision-making context.
Talking to stakeholders about MVR
Knowing where the MVR line is in a project is essential for a researcher. Stakeholders are less likely to care about MVR for its own sake, so I don’t walk them through my evaluation process verbatim.
When a plan for an insight hasn’t met the MVR, I know what it sounds like in my head. “Users aren’t able to predict their own actions”, “Preference testing may not be related to actual usage”, “Our sample size is too small to be sure about this”. If I were to bring this up directly, I’d assume my stakeholders would think, “Why are we talking about methods? I need to make a decision!”.
I like to re-frame this conversation based on our definition of rigor: “I want to help our team make a decision that will lead us to meet our metrics. With this current plan, we wouldn’t meet a minimum viable level of rigor – this means the results won’t necessarily help us meet our team’s goals; in fact, they may lead us in the wrong direction entirely”. The question of why may come up, but it’s invited by the curiosity of the stakeholder. This has a very different feel than when I lead with axioms of my own field that aren’t directly relevant to my stakeholder’s goals.
Disagreements about MVR
What if a stakeholder has a problem with your judgement of an insight’s or research plan’s MVR? Remember, MVR is made of two things: an insight’s rigor and the decision’s risk tolerance.
It’s important not to let stakeholders who are untrained in research influence your judgement of rigor — this is something I see far more often with junior researchers or researchers without solid foundations in rigor.
Humans are multidimensional in what skills they have, so it’s certainly possible a stakeholder may have some research perspective that is helpful (if they have experience/training). However, that perspective on rigor should not be driven by their desire for speed or political leanings. Speed and political leanings are important, but they don’t tangibly change an assessment of rigor.
Stakeholders certainly should influence your assessment of a decision’s risk. That is ultimately their expertise and ownership. They may be willing to accept more risk for a faster speed or political expediency, and that risk acceptance is their call. In this way, MVR is an exchange between a researcher’s assessment of rigor and a stakeholder’s assessment of acceptable risk.

This isn’t as black and white as I make it out to be, or as combative. These conversations are ideally collaborative, with each person bringing what they know best and truly listening to what the other person knows better than them. That said, expect discomfort and to tell stakeholders “no” at some points in your career.
Finding consensus
Every project has certain time constraints, resource constraints, and product-lifecycle constraints. In a discussion with a stakeholder during scoping, a researcher may have to say “no” to a certain research path. Ideally, this conversation goes “no, and…” rather than ending the discussion. Even if (1) a certain research question from a stakeholder cannot be answered given the constraints, there are still some research questions that can be answered in most situations. Or perhaps (2), given a situation’s risk, more resources can be found for the problem.
When (1) a research question isn’t quite possible to answer as stated, I typically put forth what I could answer without more resources to see how useful those answers would be. For example, if a team wants usability metrics but only has a low-fidelity prototype and a small pool of B2B users, I may have to offer qualitative insights on what needs to be fixed in the information architecture, without quantitative metrics.
When (2) a research question can’t be effectively answered and there could be more resources, I still introduce an alternative research question I’m able to answer with the resources at hand. Then, I push for additional resources if I or the stakeholder truly thinks the original research question is that critical. Perhaps we could find additional budget for vendor support or take another week to get the insights.
Consensus isn’t always possible. These conversations take place many times over quarters, halves, and years. Part of the soft skill of a researcher is making deposits into the bank of trust with stakeholders. That way, when the time comes where no consensus is possible, stakeholders are willing to trust your point of view as a research expert[6]. If you feel like you aren’t able to find a way to make those deposits, my candid advice is to find stakeholders who are open to it (at the same company or a new one).
Finding your MVR
Rigor is the foundational craft of a researcher. Understanding business and stakeholder needs (decision risk tolerance) is nearly as important, and both are consistently required to be an excellent UX researcher. Stakeholders can easily find a reason to reduce the rigor of a research plan (because resources are always limited), so it falls on UX researchers to push for an appropriate level of rigor.
Minimum Viable Rigor (MVR) is a simple conceptual framework that weighs an insight’s rigor against the level of acceptable risk for a decision when developing and analyzing UX research projects. It is a starting point for intentionally planning good research and a formulation for discussing the value of rigor with stakeholders who are not researchers.
Appendix
1. One consideration is that any research insight that is as high quality as our best guess should be valid to use. I’d argue against this from a team culture standpoint. Any insight is likely to be given more weight than a non-validated idea, so an insight should be held to a higher standard than a guess.
2. From page 27 of Just Enough Research by Erika Hall. Her book provides excellent perspectives on how much research is enough, though it focuses primarily on qualitative approaches.
3. There are many ways to define an insight descriptively. Pragmatically, I am defining an insight as the high-level meaning of results from a given analysis process, typically shared with stakeholders.
4. I would like to highlight an effort by Rebecca Grier on Matching UX Research Rigor to Risk. It gives a simple rubric to match the skill level of a researcher (and their employable rigor) to a given product risk context. While I don’t agree with everything in the article (“Rigor isn’t always needed” goes against the grain of my argument in this post), it’s a useful proposition towards quantifying when certain levels of rigor are required.
5. I’ve talked mostly about rigor and less about decision-making context. This is because UXR is often the gatekeeper of rigor. Every incentive on the decision-making side of the argument will push for faster, cheaper, and simpler (not knowing the pitfalls of these arguments in relation to research quality). UXR, in my experience, rarely has to push for faster, cheaper, or simpler, but often pushes to maintain MVR.
6. My statement isn’t meant to say researchers are the only ones who can do research. However, you need to honestly look at everyone involved in the project and consider who has the training to make a rigorous judgement call on a research plan decision.
I also want to shout out Erika Hall’s post on Minimum Viable Ethnography. It’s a great post on a simple methodology for conducting ethnographic UX research, though it doesn’t define how the approach applies across contexts beyond “No time, No money, No stomach for diverting energy away from building” and generative ethnographic research. The current post proposes a framework for comparing insight rigor with decision risk tolerance, which is quite different, even though the names share some overlap.