Research impact is qualitative

How do UX researchers quantify their impact? There are ways we can calculate a return-on-investment for our research activities. However, this is often not how a UX researcher’s performance is judged. Researchers are judged based on their impact narrative.

Of the three kinds of UX research impact1 (execution, influence, and outcome), researchers are most typically judged on influence impact. Influence itself doesn’t have any dollar amounts attached to it, though it can lead there through eventual outcome impact.

Outcome impact isn’t owned by researchers, but it’s nice to track when possible. It typically shows up as a metric (revenue per quarter, user growth year over year, etc.). Because outcome impact is what businesses prize most, it has shaped the conversation around research impact to be metric-focused. Many frameworks focus on quantifying research impact2. However, this thinking is flawed. UX research impact is not a “metric” in the same sense as the business outcomes product teams measure.

What is research impact? 

Functionally, for individual contributors (ICs), impact is a narrative. Researchers use an impact narrative to achieve job security, a promotion, or career mobility. Because it’s a narrative, it’s inherently qualitative. Research projects, product decisions, and business metrics become research impact when they’re wrapped in a narrative crafted to illustrate the value of a researcher and their work. We assign cause and effect to the actors and systems involved with research outputs so we can show that our work is valuable.

The goal of an impact narrative is not quantification. You may cite some numbers, especially if you’re able to track some outcome impact. Even then, it’s more about the story than the numbers themselves.

“Quantifying” execution and influence impact

Ideally, most of the impact in a UX researcher’s impact narrative is influence impact3. While you could quantify some aspects of it, doing so doesn’t make much sense. A quarter where you influenced three roadmaps is not inherently better than one where you influenced six. It depends on the magnitude and strategic value of the changes to those roadmaps (were they big or small?). Making minor UI changes across three product pillars is not as significant as sparking a new line of business, even though the latter may be cited in fewer roadmap documents. Quantifying influence impact conflates frequency with magnitude and importance.

The same clearly goes for execution impact. If one researcher ran ten rapid research studies that uncovered small usability issues in one product pillar, that does not mean they had more impact than a researcher who ran one generative study that reshaped how the company segments its users. Both kinds of research are important and necessary, but attempting to compare them quantitatively exposes the apples-to-oranges problem we run into.

“Quantifying” outcome impact

Let’s try an exercise to see why quantifying our execution and influence impact doesn’t make sense. Imagine we quantified our outcome impact the same way. Say you’re lucky enough to support two product efforts where you can track outcome impact. In one, revenue goes up by 2% after the launch. In the other, your feature launch increases user retention by 3%. If we “quantified” our outcome impact the way we do execution or influence impact, we’d say “I improved 2 metrics this quarter” or, even worse, “I improved metrics by 2.5% on average.” Context is completely lost when we ignore the qualitative nuances of these metrics.

Even outcome metrics are not entirely quantitative in our impact narrative. If a company is in a growth phase, user engagement increases may be more valuable than profit increases. The only quantitative comparison you can really make is between impacts on the same metric in the outcome phase. For example, if one project led to 3% more daily active users and another led to 6% more daily active users over a roughly similar time period, the second project is more valuable. Even this gets murky, however, if you need to consider how much your work contributed to each project overall (impact diffusion). So in the end we’re back at storytelling, even with outcome metrics.

Where is quantification helpful?

Quantification is not entirely useless, especially outside the context of an IC’s research impact narrative4. It can provide a diagnostic view of a team from a management perspective. For example, if the number of research share-outs decreases over time, that could signal a lack of stakeholder connection or visibility. If one researcher has poor performance reviews but executes a similar number of projects as their peers, that points to a breakdown somewhere other than research volume or pace. These macro views may help manage a team, but they aren’t how most IC researchers will need to structure their impact.

Quantifying execution or influence may also be useful from a research ops perspective. In that domain, running effective working processes is the goal. Things like the number of insights added to a repository or the number of sessions run per month are outcomes a research ops person owns (among many others). This makes sense because the goal of an ops role is different from that of a researcher.

Internal impact vs. resumes

This post is focused on writing an impact narrative internally for a manager or promotion committee. Things can look a little different when you’re writing bullets for a resume or a case study for a job interview. There, you may see some benefit in counting the number of projects run or roadmaps influenced to quickly convey the scope and volume of your work. I’m a firm believer in prioritizing quality over quantity of projects, but hiring managers may want a high-level sense of the cadence someone can operate at. Still, this quantification should only supplement the qualitative elements of your impact narrative. Ultimately, your influence as a researcher should remain the focus.

Wrap up

When you’re an IC thinking about impact, I don’t recommend attempting to quantify the elements of your impact. It doesn’t drive a stronger narrative about the value of your work, and apples-to-oranges comparisons of execution and influence impact make the process confusing at best. As you develop a research impact narrative, focus on the quality of how you ran your projects, what influence they led to, and how that impacted the team.

Stay tuned for another article on how I approach developing a research impact narrative tactically. It will be published with Condens, but I’ll still send it via my newsletter (sign up below).

Skip the algorithm, get my new posts right to your inbox.

  1. I defined a framework for research impact focused on what individual contributor researchers can control and own. It involves three components:
    Execution impact: How researchers begin, execute, and complete their projects (and support their team).
    Influence impact: How researchers cause change in other actors’ decisions.
    Outcome impact: How other actors measure success in actions influenced by researchers. ↩︎
  2. See my full list of references. In particular, the following authors emphasize quantification of impact: den Bouwmeester, 2024; Shukla, 2021; Dombrowski & Whitman, 2024; Pryor, 2024. ↩︎
  3. This isn’t always true. Consultants, for example, tend to be graded on their deliverables, since those are the final things they hand off to a client. This would make execution impact the main form of impact. It gets a bit more complex because consultants are also probably graded on metrics like billable hours or revenue generated, but those don’t directly relate to the product being researched. ↩︎
  4. I’m writing deeply from an IC perspective – it’s what I’ve done for my whole career so far. Even in roles where I lead other ICs on projects and shape stakeholder team collaboration, I’m not in calibration discussions during performance reviews. It’s possible I am missing something from the manager’s perspective. If you disagree (or agree) as a manager, I’d love to hear from you (or write your own piece about it!).  ↩︎