Cognitive Survey Tests: Faster methods for wider adoption

A puzzle made of felt

TL;DR

  • Cognitive survey tests have many benefits that industry researchers rarely take advantage of. As industry researchers, we can speed up the process to make those benefits more accessible.
  • An unmoderated approach can yield useful insights with little effort, typically adding only 1-2 days to a project timeline. Jump to the short way to hit the ground running with your cognitive survey test.

(Be sure to see part one in this series to determine when you should incorporate a cognitive test in your survey project. Hint: you probably should.)


Most researchers I know have never conducted a cognitive survey test. In academia, they’re left to specialized survey psychometricians. In industry, researchers typically don’t have time to consider running one. I didn’t conduct my first cognitive survey test until I was several years into my industry career. Despite the lack of adoption, the benefits of cognitive testing are immense.

The goal of this process is to reduce the ways respondents make errors when answering questions, so your survey data has stronger validity. These errors can happen in a number of ways. Let’s start by re-summarizing the cognitive processes from part one that describe how a respondent answers a question. A respondent:

  • Interprets the question
  • Searches memory for relevant information
  • Evaluates and/or estimates their response
  • Provides information in the format requested

Distinct errors happen at each phase of this process. Cognitive testing uncovers these errors so a researcher can redesign the question to make them less likely to happen. As Willis says, cognitive testing “explicitly focuses on the cognitive processes that respondents use to answer survey questions; therefore, covert processes that are normally hidden, as well as overt, observable ones, are studied.”

There are short and long ways to cognitively test your survey, and I’ll cover both in this post.

The short way (unmoderated)

Image of a felt and paper hare

With an unmoderated approach, you can fit cognitive survey tests easily into your survey projects (I’ve done it within 8 hours). To start, you need access to an unmoderated user testing tool (UserTesting, Userlytics, etc.). It also helps if you have a rapid research process for recruitment (this could be a serviced panel like UserTesting or an internally maintained panel). 

Setting up the unmoderated cognitive survey test

Here’s how to set up an unmoderated cognitive test to be easily reused for all of your survey projects:

  • Get your survey instrument link ready to test.
    • If you have standard questions that have been tested, you can even remove those to focus feedback on the new elements.
  • Create a test in your unmoderated testing platform of choice.
    • The primary task link is simply your survey link.
  • Add the instructions for participants in the test.
    • For experienced panel participants, you may only need a high-level overview that describes how to approach the survey.
      • Example text: Focus on the clarity of the questions more than your answers; consider which words do not make sense; consider which response options need to be added; etc.
    • For participants not well versed in giving usability feedback, you may choose to add text within the survey that explicitly prompts the user.
      • Example text: Do you know what the word “home feed” refers to? What does the word “satisfaction” mean in this context? Did this multiple choice selection contain a response you identified with?
  • Specify your recruitment sample.
    • Sample size: 4-5 users is often enough to discover major problems. If you have segments, consider adding more users overall while lowering the number within each segment (2 segments: 4-5 from each; 3 segments: 3-4 from each; 4 segments: 2-3 from each; etc.). A small planning sketch follows this list.
  • Launch the test and analyze the feedback.
    • What questions are confusing? What words can you change to make them more interpretable?
    • Are the response options clear and exhaustive? What needs to be changed or added?
  • Iterate (if needed).
    • If you see little agreement or want to test your changes, you can simply run the test again to iterate. 
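
If it helps to see those pieces in one place, here is a minimal sketch of the setup as a reusable plan, written in Python purely for illustration. The survey link, probe wording, and segment names are placeholders rather than anything tied to a specific platform’s API, and the sample-size helper just encodes the rough rule of thumb above.

# Minimal, illustrative sketch of a reusable unmoderated cognitive test plan.
# Everything below is a placeholder to adapt; it is not tied to a specific tool.
test_plan = {
    "survey_link": "https://example.com/your-survey",  # placeholder link
    "instructions": (
        "Focus on the clarity of the questions more than your answers. "
        "Call out words that do not make sense and response options "
        "that seem to be missing."
    ),
    "embedded_probes": [  # for participants less used to giving usability feedback
        "Do you know what the word 'home feed' refers to?",
        "What does the word 'satisfaction' mean in this context?",
        "Did this multiple choice selection contain a response you identified with?",
    ],
    "segments": ["new users", "power users"],  # hypothetical segments
}

def recruits_per_segment(n_segments: int) -> int:
    """Rough rule of thumb from above: ~5 with no segments, fewer per segment as segments grow."""
    if n_segments <= 1:
        return 5
    return max(2, 6 - n_segments)  # 2 segments -> 4, 3 -> 3, 4+ -> 2

per_segment = recruits_per_segment(len(test_plan["segments"]))
for segment in test_plan["segments"] or ["all participants"]:
    print(f"{segment}: recruit ~{per_segment} participants")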

With this approach, you can make your survey significantly better while adding only 1-2 days to your project timeline (depending on whether you iterate). Once you’ve run it the first time, the project file in your testing tool becomes a template for future surveys. I find that I rarely need to change the prompts, so I can launch at a moment’s notice.

There are some scenarios when you should consider the long way.

The long way (moderated)

Image of a felt and paper turtle

When to choose a moderated approach

The short way works in most scenarios, but here are a few reasons to consider the longer, moderated approach:

  • Needing every possible finding, for a maximally important project
    • The unmoderated approach misses some details, but it’s still worthwhile: unmoderated cognitive survey testing has been shown to yield results similar to moderated methods (Mockovak & Kaplan, 2015). That said, a moderated approach will catch a few more edge cases.
  • Lack of access to unmoderated tools
    • Depending on the maturity of your org and research ops, you may not yet have access to unmoderated tooling.
  • Participants who cannot or will not use unmoderated testing tools
    • Certain population groups may not have access to the technology needed to participate in an unmoderated test. B2B participants or small populations may need a higher-touch approach to commit to testing.

Setting up the moderated cognitive survey test

Academic literature provides many great options for learning about cognitive survey tests. If you want to go deeper, check out the resources in the citations at the end of this post.

I highly recommend Willis’ (1999) short guide. While academic, it’s readable and approachable. Since even that runs 20+ pages, I’ll give you a still shorter guide to moderated cognitive testing, in keeping with our theme of speed and accessibility:

  • Choose a think aloud protocol or verbal probe technique.
    • Learn how to execute each style (in Willis’ guide or the quick primer below).
  • Recruit your users like you would for a qualitative interview.
    • Sample size: 4-5 users is often enough to discover major problems. If you have segments, consider adding more users overall while lowering the number within each segment (2 segments: 4-5 from each; 3 segments: 3-4 from each; 4 segments: 2-3 from each; etc.).
  • Run the sessions, typically 30 to 60 minutes, depending on the length of the survey.
    • Explain the goals of the session to participants: you’re more interested in what’s clear or confusing about the questions than in their actual answers.
    • Use the think aloud or verbal probe techniques to understand respondent challenges.
  • Analyze the data to see what themes emerge (a minimal tagging sketch follows this list).
    • What questions are confusing? What words can you change to make them more interpretable?
    • Are the response options clear and exhaustive? What needs to be changed or added?
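
If you capture notes with lightweight tags during the sessions, the theme analysis can be as simple as counting which questions attract which kinds of issues. Below is a minimal sketch in Python, assuming hypothetical (question, issue tag, quote) rows; the tags and example rows are made up for illustration.

from collections import Counter, defaultdict

# Hypothetical tagged notes from cognitive testing sessions:
# (survey question, issue tag, participant quote). Replace with your own notes export.
notes = [
    ("Q2: home feed satisfaction", "unclear term", "What counts as the home feed?"),
    ("Q2: home feed satisfaction", "unclear term", "Satisfied with what, exactly?"),
    ("Q4: usage frequency", "missing response option", "None of these match how often I use it."),
    ("Q2: home feed satisfaction", "double-barreled", "Feed quality and app speed are different things."),
]

issues_by_question = defaultdict(Counter)
for question, tag, _quote in notes:
    issues_by_question[question][tag] += 1

# Surface the questions with the most issues and their most common issue type.
for question, tags in sorted(issues_by_question.items(), key=lambda kv: -sum(kv[1].values())):
    top_tag, top_count = tags.most_common(1)[0]
    print(f"{question}: {sum(tags.values())} issues (most common: {top_tag} x{top_count})")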

When you have your survey instrument, you can approach the method with a think aloud protocol or a verbal probe technique. Both of these approaches involve a moderator, much like a structured or semi-structured qualitative interview.

Think aloud

In the think aloud protocol, the researcher reads the questions to the participant and asks them to verbalize their thinking as they answer. The researcher can ask them to “go on” or “explain further”, but rarely interjects otherwise. This is very similar to the usability technique of the same name.

Choose this method because:

  • The researcher is already familiar with think aloud protocols or there is little time for researcher training.
  • Your recruitment sample is likely to be verbose or talkative (common among standing members of feedback or research panels).

Verbal probe

In the verbal probe technique, the researcher asks the cognitive test questions after the participant has answered a survey question. The probes are often structured around specific components of the questions being tested. They can be concurrent (asked after each survey question) or retrospective (asked after the entire survey). I’d recommend the concurrent format, unless your survey is very short and you want to see how answers to one question influence a subsequent question.
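
For the concurrent format, it helps to write each probe next to the survey question it targets so the moderator never has to improvise. Here is a minimal sketch in Python of what that guide could look like; the questions and probes are hypothetical, echoing the earlier examples.

# Minimal, illustrative sketch of a concurrent verbal probe guide.
# Each survey question carries the probes the moderator asks right after the participant answers it.
probe_guide = [
    {
        "question": "How satisfied are you with your home feed?",
        "probes": [
            "What does the word 'satisfaction' mean to you in this context?",
            "What do you think 'home feed' refers to?",
        ],
    },
    {
        "question": "How often do you use the app?",
        "probes": [
            "How did you arrive at that answer?",
            "Were there response options you expected but did not see?",
        ],
    },
]

# Print a simple moderator script: ask the question, record the answer, then probe.
for item in probe_guide:
    print(f"ASK: {item['question']}")
    for probe in item["probes"]:
        print(f"  PROBE: {probe}")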

Choose this method because:

  • The researcher has less moderation experience and would benefit from a structured question guide (as a more junior researcher might).
  • Your survey questions could encourage a great deal of irrelevant discussion (such as political or personal views), and you want to keep the discussion focused.

Either approach will make your survey much sharper than it would be otherwise.

Wrap up

Cognitive survey testing is an underutilized method in UX research. As researchers, we rarely put our own creations in the hot seat the way design and engineering do. It’s time to get over that fear and open our work to constructive feedback from our users.

With an unmoderated testing approach, there are few excuses to avoid cognitively testing your surveys. When that approach isn’t available, you should have moderated approaches in your toolkit to make sure your surveys are as effective as possible.

Citations

Geisen, E., & Bergstrom, J. R. (2017). Usability testing for survey research. Morgan Kaufmann.

Mockovak, W., & Kaplan, R. (2015). Comparing results from telephone reinterview with unmoderated, online cognitive interviewing. Proceedings of the American Association for Public Opinion Research Annual Conference.

Willis, G. B. (1999). Cognitive interviewing: A “how to” guide. Meeting of the American Statistical Association.

Willis, G. B. (2004). Cognitive interviewing: A tool for improving questionnaire design. Sage Publications.

All images generated using Gemini Advanced.