Guides
May 13, 2025
—
Robert Hess
Creating a Qualitative Research Guide for AI
How to design effective AI-driven qualitative studies with clear goals and structured prompts. For a more comprehensive overview of AI-moderated research, check out our Ultimate Guide to AI-Moderated Research.
Table of Contents
Why You Need a Thoughtful Research Guide
What Makes a Good AI Research Guide
Interview Type Considerations
Best Practices
Conclusion & Next Steps
Why You Need a Thoughtful Research Guide
When you move from human moderators to an AI interviewer, you’re trading the nuance and intricacy of person-to-person interaction for the scalability and broad usability of a machine. It’s like going from a chisel to a jackhammer: you’ll need to modify your approach when switching from one to the other.
AI Works Within Your Boundaries
AI naturally works within what’s appropriate for a work setting, but beyond that it needs context-specific guidance. Your research guide needs enough specific information for the AI to be smart about your organization, and explicit guidelines if you want it to avoid potentially sensitive topics. At the same time, excessive direction can hinder it. The AI is like a well-trained dog: it will always follow your directions, for better or worse.
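If there are topics you want the moderator to steer clear of, say so directly. A hypothetical boundary instruction might read: “Don’t ask about participants’ income or health. If they bring either up on their own, acknowledge it briefly and return to their experience with the product.” One or two sentences like this are usually enough; the goal is a clear fence, not a wall of rules.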
Consistency at Scale—For Good and Bad
One of AI’s biggest upsides is running multiple interviews simultaneously, day or night. It also has the superpower of conducting the same interview, with the same energy, across hundreds of participants. But that also means any flaw in your guide design replicates across every session. If you don’t check and test your guide before sending the interviews out, you may end up with hundreds of sessions’ worth of insights you’re not actually interested in.
What Makes a Good AI Research Guide
Start by providing context. AI doesn’t have all the context on your business; it’ll try to fill in the gaps, but the more information you provide, the better it’ll be able to follow the threads you’re interested in and navigate the discussion outside the narrow confines of your guide questions. Telling it a bit about what you’re studying and aiming to learn goes a long way toward getting useful insights.
You’ll also want to consider your audience. While AI is good at speaking many languages, it has room for improvement when it comes to regional slang and cultural nuance. If your study requires an understanding of local cultural context, explain it to the AI the way you would to a tourist.
Many people won’t be comfortable divulging certain types of information during an interview, and requiring that information can become an impassable roadblock for the rest of your study. Take, for example, a website usability test in which participants must create an account using a social security number. If your study involves recording their screen, you may find that most participants never get through step 1. Check that your study doesn’t throw up roadblocks for your ideal participants.
Depending on how specifically you want your interview to flow, you can include question logic within your probing instructions. For example, if certain topics raised by the participant would make a section irrelevant, you can prompt the AI interviewer to skip to another section. Whether to build two separate studies or pack the logic into one is up to your discretion, but it’s nice to have that flexibility with AI.
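A hypothetical skip instruction might read: “If the participant says they have never used the mobile app, skip the mobile-experience section and move directly to the desktop-experience questions.” Spelling out both the trigger and the destination leaves the AI no room to improvise the jump.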
Interview Type Considerations
Just like with traditional user research, different types of interviews require different approaches. Depending on the interview type, AI will either outperform or lag behind a human researcher in budget efficiency and nuance of insight. AI is capable of conducting every type of interview and is highly effective at running large-scale research quickly, but consider your goals when deciding how to use it.
Exploratory Interviews
Exploratory interviews primarily call for open-ended questions and flexibility from the interviewer. When designing for exploratory interviews, start by toggling the AI to a high follow-up level. The level of empathy you choose will depend on your topic of study, audience, and goals. For sensitive topics, for example, you may want the interviewer to display a higher level of empathy to help participants feel more comfortable, while other studies may yield better data with a more detached interviewer.
When providing probing instructions, be sure not to box the interviewer in. AI conducts interviews more rigidly than a human, which has its strengths but takes extra attention from the interview designer to reach the minimum flexibility an exploratory interview needs. Leave probing instructions open-ended, nudging the AI in a rough direction if you provide one at all.
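As a rough illustration, an instruction like “Ask why they stopped using the product, then walk through each of these five reasons in order” boxes the interviewer in, while “If they mention frustrations, explore what caused them and how they worked around them” leaves room for the AI to follow the participant’s lead.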
AI can follow threads, but it won’t be building on a lifetime of human experience, so plan your studies accordingly. Some studies gather the most insightful data when an interviewer can draw on elements of the human experience to connect with the participant. In those human-specific cases, a human moderator is the better choice, but for the majority of exploratory interviews, AI can handle the bulk of the work.
Usability Interviews
Usability interviews are where AI really shines, and it’s often more cost-effective than employing humans to do the research. Because usability studies are so task-based and generally not focused on building an empathetic connection, AI can do all the same things as a human researcher. AI even outshines humans in some areas, opening the door to completely new research capabilities.
Take a recent study with sheep farmers spread across the islands of New Zealand, for example. The developer of a new app for shepherds needed to run usability tests, over the span of a couple of weeks, with more than 40 shepherds living in remote areas covering hundreds of square miles. That just wouldn’t be possible with traditional interviewing, but Outset helped get this study (plus two follow-ups) completed in a matter of days.
AI still needs clear prompting, but the output is essentially the same as a traditional interview: “navigate to this page; click this button; how did that experience make you feel?” and so on. AI is easily the fastest, most cost-effective, most scalable, and most agile method of usability research.
Concept Testing
Concept testing is another area where AI moderation can shine. With traditional research, you either rely on conjecture and very long surveys to get detailed information on user sentiment, or you conduct a few in-depth studies and risk missing the mark due to a small sample size. AI-moderated research changes this in some major ways.
First, you unlock the opportunity to chase leads from a much larger and more diverse set of concept testers. Surveys limit you to the information you thought to ask for, but AI moderation lets you explore participants’ ideas further. With AI, you can better anticipate reactions to design pivots, gain stronger direction for your next iteration, and move with more confidence thanks to the richer set of insights.
It takes a bit more probing to pull good insights in concept testing. Set the AI moderator to full exploration to get the most out of these studies: the more information you get, the more direction you gain. Effective probing instructions help steer these longer AI-moderated conversations, since the AI can stagnate on irrelevant topics if it doesn’t have enough context. For the most part, though, you can just let the AI do its thing and get deep insights out of it!
Longitudinal/Diary Studies
AI offers some unique advantages in diary studies as well. Longitudinal studies produce much larger datasets than the other formats, which traditionally makes parsing them for insights a huge timesink for researchers. Because AI can pull themes, sentiments, and more in just minutes, researchers can spend far more time drawing connections between the insights.
You also get the chance to guide the conversation within diary entries, which is rare in traditional research. This empowers you to get more out of participants by letting the AI coax more detailed answers when a participant isn’t feeling particularly open on a given day. Not only does that help populate the study’s dataset, it also helps keep response quality consistent across the course of the longitudinal study.
The interview guide may be more fluid here. In some entries you may want to probe more deeply, while in others you’re fine with shallow insights. Having the flexibility to do that at scale is useful for maintaining participant engagement by focusing on the things that deserve their attention.
Best Practices
Provide the AI context if needed
The better the direction you provide your AI moderator, the better it will be able to lead interviews for you. With Outset, you can give the moderator multiple types of context input, in addition to guiding the direction of its probing. For example, if there’s product-specific information you want your moderator to be aware of, you can supply that context. Outset also has built-in boundaries that prevent discussion of more sensitive topics, and you’re welcome to provide additional direction on tone and language as needed.
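For example, a hypothetical context note might read: “We’re a meal-kit subscription service. Participants are current subscribers who have skipped at least two deliveries in the past three months, and we want to understand why.” A couple of sentences like these ground the moderator without constraining it.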
Adjust your approach to fit the AI format
Rather than translating a human-led interview guide word for word, make some tweaks to suit AI moderation. For example, many in-person interviews or focus group sessions run 60 minutes, while we find that the most common interview length on Outset is ~25-30 minutes. We certainly have customers that run longer research, but engagement can wane with too many questions. On that note: keep it engaging! Varying the types of questions (e.g., conversational vs. multiple choice), adding visuals when relevant, and avoiding unnecessary repetition can all help engage your participants when you don’t have a human in the room.
Don’t use AI for something that could have been a survey
AI is an incredible tool that can bring user research to new heights, but it’s wasted on basic questions that don’t use dynamic follow-ups. Like using a saw to cut a stick of butter, asking only multiple-choice questions isn’t an effective use of AI, unless you want to probe into participants’ selections after the closed-ended questions. There are cheaper ways to run surveys.
Let the AI riff
You don’t need to script everything you want the AI to ask. Be explicit if you need it to look for specific things, but it will naturally follow threads throughout the interview without hand-holding. That said, it won’t automatically pick up on the intention behind the information you provide if it’s just a nebulous brain dump. Structure your questions as questions, and give the AI specific prompts on what to dig for. Finally, don’t do things you wouldn’t do in a traditional interview, like asking double-barreled questions. The AI will follow any instructions you give diligently, so be mindful when giving them.
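For example, “How did you find the signup flow and the pricing page?” is really two questions. Splitting it into “How did you find the signup flow?” and a separate question about the pricing page will get you cleaner answers from an AI moderator, just as it would from a human one.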
Don’t overload the AI with prompts
The AI will take everything you say as an input that needs to be incorporated. Too many prompts restrict its agility and can lead to odd interview pathways and missed insights. AI might not be quite as polished as a human moderator, but it’s more capable than you may think. Just give it enough prompting to keep it going in the right direction and let it do the rest.
Test your guide
All too often, researchers send out interviews without testing their guides, then have to completely rework the study after seeing the lack of usable data that comes in. Pay attention to three things: flow, comprehension, and engagement. With Outset, you can run through your guide from the beginning or from a specific question. Take the interview yourself, or have a coworker take it, so you can see how it actually feels to the participant and what kinds of responses you’re getting. This is also an area where synthetic users can be useful. The final study needs to interview human users, but synthetic users are an easy way to check for correct flow, structural errors, and poorly phrased questions.
Conclusion & Next Steps
Building an effective guide for AI isn’t complicated, but it does take some consideration. There’s more to learn about AI-moderated research with Outset—like how to choose the appropriate depth of probing for the AI and how to give probing instructions. That’s not even including everything that comes with analysis—but you’ll need to book a demo with our team to learn about that ;)
Interested in learning more? Book a personalized demo today!
Book Demo