I often get asked: what does a quantitative UX researcher actually do? There’s no single correct answer. Rather than discussing educational backgrounds, tools, or methods, as others have done before (1, 2, 3), this post outlines the thinking behind the types of projects I’ve typically done as a quantitative UX researcher (UXR).
A note on defining the role of quantitative UXR (updated):
I’ve noticed some archetypes emerge since I took on the title myself. At Meta, where I first held the job title of quantitative UXR, the role typically carries strong ownership of survey methodology. At Google, the role typically owns behavioral log data closer to what a data scientist may deal with, albeit with a focus on precise user actions rather than high-level business metrics.
This post previously only focused on survey-related work, which was the bulk of my focus at Meta. Since then I have taken on more projects purely based on log data. I updated this article to include one more project type with that focus (02/2026).
In my work as a quantitative UXR helping product teams and organizations, I’ve worked on five main project types:
- Generalizing findings from qualitative research
- Uncovering user motivations and attitudes behind data science metrics
- Creating tracking surveys for business-critical user attitudes
- Measuring usability at a high level of confidence
- Understanding logs at a user experience grain
Let’s take a look at what each of these means, in depth, and consider some hypothetical examples.
Generalizing qualitative research

Qualitative UXRs generate a huge amount of insights at rapid speed. UXR teams normally have far more qualitative UXRs than quantitative UXRs. For many product decisions, a team can confidently keep experimenting, designing, and moving ahead with just the qualitative data. However, when a business decision is extremely high risk or requires comparison across segments, a quantitative UXR can help validate qualitative findings at a larger scale.
Examples
A qualitative UXR may find 20 customer needs through deep-dive interviews. Using a MaxDiff survey, a quantitative UXR could effectively prioritize all 20 needs in order of importance. The product team could then take this information to choose what features to develop first.
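Full MaxDiff analyses typically use multinomial logit or hierarchical Bayes models, but a simple count-based score conveys the core idea. The sketch below is illustrative only: the needs, tasks, and responses are hypothetical, and the scoring rule is the basic (times chosen best − times chosen worst) / times shown.

```python
from collections import Counter

# Hypothetical MaxDiff responses: each task shows a subset of customer
# needs and records which one the respondent picked as most ("best")
# and least ("worst") important.
tasks = [
    {"shown": ["search", "offline", "sharing", "themes"], "best": "search", "worst": "themes"},
    {"shown": ["offline", "sharing", "export", "themes"], "best": "offline", "worst": "themes"},
    {"shown": ["search", "export", "sharing", "offline"], "best": "search", "worst": "export"},
]

shown, best, worst = Counter(), Counter(), Counter()
for t in tasks:
    shown.update(t["shown"])
    best[t["best"]] += 1
    worst[t["worst"]] += 1

# Count-based score: (best picks - worst picks) normalized by exposure.
scores = {need: (best[need] - worst[need]) / shown[need] for need in shown}
ranked = sorted(scores, key=scores.get, reverse=True)
```

With real data (20 needs, hundreds of respondents, a balanced experimental design), the ranked list is what the product team would use to choose what to build first.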
A qualitative UXR may find that some users in segment A have a certain need, but across six total interviews that need didn’t surface as prominently in segment B. While qualitative research is invaluable for discovering user needs, its strength is not sizing how those needs differ across groups more broadly. Using a general attitudinal survey, a quantitative UXR could find out whether users in segment B also have this need and whether it’s more or less prominent than in segment A.
Finding the why behind data science

Data science (DS) is often at the forefront of information-gathering roles in an organization. A DS can effectively see the entire landscape of behavioral usage in a product. However, sometimes knowing what every user did isn’t enough, and a team needs to know why users did it. A quantitative UXR could run an intercept survey across segments to map user motivations to behavioral groups.
Example
A DS may find that users are not engaging with a new feature. A quant UXR could uncover why the feature is going unused through a precise survey. This survey might appear right as a user discovers the feature or navigates to it, and contain a single multiple-choice question: Is the feature not useful? Is it too hard to use? Are users unaware it exists? The survey could reveal the most common reason, and the product team could then take the right action to improve engagement.
Tracking user attitudes

An organization often wants to know user attitudes at a high level. Do users like the product? Does the product fulfill the mission the organization has set out to accomplish for its users? Is the product easy to use over time? This is where a quantitative UXR is firmly in the center of their skill set. They can use advanced survey methods, such as survey response weighting or nested statistical models, to accurately capture users’ attitudes over time.
Example
A social media organization may have a mission to help users have fun and connect with their friends and family. Top executives could want to know how successful the company is at that mission. A quantitative UXR could set up tracking surveys to measure whether users of the product feel they have fun in it and whether it creates a sense of connection with friends and family. The organization could then understand how well it’s meeting its mission. It could also dig deeper across geographic segments, new versus tenured users, or how external events affect users’ perceptions of its products. Some organizations set up a simple CSAT or NPS, but a quantitative UXR can build something far more tailored and relevant to the goals of the organization.
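Survey response weighting, mentioned above, can be sketched with a minimal post-stratification example. Everything here is hypothetical: two user segments, a known population split, and a sample where new users are underrepresented, so their responses are weighted up before computing the tracked metric.

```python
# Hypothetical tracking-survey respondents; has_fun is a 1/0 answer to
# "I have fun using this product."
respondents = [
    {"segment": "new", "has_fun": 1},
    {"segment": "new", "has_fun": 0},
    {"segment": "tenured", "has_fun": 1},
    {"segment": "tenured", "has_fun": 1},
    {"segment": "tenured", "has_fun": 1},
    {"segment": "tenured", "has_fun": 0},
]

# Assumed known population shares; the sample above skews tenured.
population_share = {"new": 0.5, "tenured": 0.5}

sample_share = {
    seg: sum(r["segment"] == seg for r in respondents) / len(respondents)
    for seg in population_share
}
# Post-stratification weight: population share / sample share per segment.
weights = {seg: population_share[seg] / sample_share[seg] for seg in population_share}

weighted_pct = sum(r["has_fun"] * weights[r["segment"]] for r in respondents) / sum(
    weights[r["segment"]] for r in respondents
)
```

The weighted estimate (62.5% here) differs from the raw sample mean (66.7%) because the underrepresented new users, who reported less fun, are weighted up to their true population share.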
Measuring usability

This may be my bias from my human factors background showing, but usability is something UX research has a unique ability to uncover and drive improvements in. Most organizations deploy usability research in a qualitative setting (n=5, right?). However, with specialized tools (UserZoom, MUIQ, etc.), a quantitative UXR can easily scale up usability investigations to a large sample when the situation calls for it. These situations may be strategic usability tracking and improvement or tactical tie-breakers.
Examples
A team may want to understand how their product’s usability compares to key competitors. A quantitative UXR could develop a competitive usability benchmarking study. This would show strategically where their product falls short of a competitor and what tasks need to be fixed most urgently. Qualitative testing alone could not measure the difference in usability across competitors or rank tasks by urgency.
A team may want to launch a new navigation menu but needs to settle on a design variant before spending valuable engineering time building the final product. A qualitative study can’t confidently declare a winner based on behavioral metrics. A quantitative UXR could set up a usability test in a clickable prototype and measure completion rates across versions to declare a winner. With even less time, they could set up a tree test using just the names of each navigation menu item and declare a winning design variant from the information architecture alone.
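Declaring a winner on completion rates boils down to comparing two proportions. One standard approach, sketched below with hypothetical counts, is a pooled two-proportion z-test; real studies would also consider effect size, sample-size planning, and the tooling's built-in stats.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided pooled z-test for a difference in completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the normal CDF (expressed with math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical prototype test: variant A completes 170/200 tasks,
# variant B completes 140/200.
z, p = two_proportion_z(170, 200, 140, 200)
```

With these made-up numbers the difference (85% vs. 70%) is clearly significant, so variant A would be declared the winner; with a smaller gap the test guards against calling a winner on noise.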
Log data at a user experience grain

DS as a function works almost entirely within log data generated through software telemetry. Quantitative UXR still has an additive role to play in this area. DS often focuses on high-level goals and business metrics. These are critical but tend to view user behavior only through aggregated measures (users who create accounts have 75% longer sessions; the platform has 10 million active users). Quantitative UXR is not normally tasked with business-level measurement, so they can focus on the user experience grain: what does an instance of real usage actually look like? One common method is sequence analysis (modeling the ordered events of user behavior). Advanced models require R or Python, but one can discover a lot with some basic SQL.
Examples
A DS may measure the average number of videos a user streams per week for a streaming app. This could be how the business determines product success. They may even dive deeper by seeing which experiments raise this metric or how other feature usage affects it. An adjacent quantitative UXR might instead look at the number of clicks user segments take before settling on a stream. The latter isn’t something the business wants to measure over time, but it can be more clearly informative about what the team needs to improve, or about how people actually experience the app. One insight might be that users with long streams browse at least three categories on average, or that 50% of user sessions select the most recently streamed show.
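A lightweight form of the sequence analysis mentioned above is counting adjacent event pairs within each session, which surfaces the transitions that dominate real usage. The sketch below uses hypothetical log rows for the streaming-app example; production work would pull these from the telemetry tables via SQL first.

```python
from collections import Counter
from itertools import groupby

# Hypothetical event log rows: (session_id, timestamp, event).
log = [
    ("s1", 1, "open_app"), ("s1", 2, "browse_category"), ("s1", 3, "play_stream"),
    ("s2", 1, "open_app"), ("s2", 2, "search"), ("s2", 3, "play_stream"),
    ("s3", 1, "open_app"), ("s3", 2, "browse_category"), ("s3", 3, "browse_category"),
    ("s3", 4, "play_stream"),
]

# Rebuild each session's ordered event sequence (sorting puts rows in
# session/timestamp order), then count adjacent pairs (bigrams).
transitions = Counter()
for _, rows in groupby(sorted(log), key=lambda r: r[0]):
    events = [event for _, _, event in rows]
    transitions.update(zip(events, events[1:]))

most_common = transitions.most_common(3)
```

Here the dominant transitions are open_app → browse_category and browse_category → play_stream, the kind of "what a real session looks like" finding a business-level metric would never surface. Longer n-grams or full sequence-clustering models extend the same idea.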
Wrapping up
The five projects above showcase the type of work I’ve personally done across organizations. There are numerous other areas where a quantitative UXR can show their expertise; to name a few:
- Develop a survey scale for repeated use (i.e., classic psychometric development).
- Use machine learning to classify user-generated content.
- Run a sentiment analysis to discover what your app store reviews are saying about users’ experiences with your product.
- Run A/B experiments when there is no DS resource.
It’s hard to put quantitative UX research in a box. Take all of these key areas as a jumping-off point rather than an attempt to exhaustively define the quantitative UX researcher’s job role.
Since writing the first version of this article, the market for UX research in general has suffered a downturn. Still, I see more teams looking to hire for quantitative UX research roles now than I did five years ago. As a discipline, it has matured significantly despite the economic challenges.
If you want to learn more right now about how to do the work, check out my post on self-study resources for quantitative UX research.
Sign up for my spam-free newsletter below or follow along on LinkedIn to see future posts from me.