The Human Element: What AI Can Learn from Traditional Interviews

Jun 3, 2024


We set out to understand what makes human interviews great and where AI-moderated research could learn from human-moderated research. More specifically, we wanted to examine whether AI-moderated research should bring more human elements to the experience.

TL;DR

Human interviewing is great because of the uniquely human elements of the experience, while AI interviewing is great precisely because there isn’t a human present (e.g., lack of judgment, async flexibility). In other words, participant opinion suggested that both methods should lean into what makes them different and special.


The overwhelming preference among the 114 people interviewed was to keep the AI-moderated experience neutral and not attempt to overly ‘humanize’ it, largely citing how the current experience maintains a clear distinction between what’s human and what’s technology.

(Note: this is not an academic study. It is a qualitative study aimed at gaining insight into the strengths of human- and AI-moderated interviews.)

Below, we outline some key findings. For the complete write-up, download the full report at the end of this page!

Here's what we did

  • We interviewed 114 individuals aged 24-70 in the US who had recently participated in a traditional one-on-one qualitative interview with a human researcher.

  • We recruited from a general population using a recruiting panel (Respondent.io).

  • Method: Participants took Outset AI interviews with Video Response (where AI offers a question in text and participants respond with a video).

    • They were asked to reflect on their recent interview, describe their ideal research experience, compare human and AI research experiences, and discuss the role of various human attributes.

    • Outset’s AI system dynamically probes deeper with follow-up questions, asking 0, 1, or 2 follow-ups per answer depending on how thorough the answer is.
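The probing rule above (0, 1, or 2 follow-ups depending on answer thoroughness) can be sketched as a simple decision function. This is a toy illustration only — the function name `follow_up_count` and the use of word count as a thoroughness proxy are assumptions for the sketch; Outset’s actual system judges thoroughness dynamically, not by length:

```python
def follow_up_count(answer: str, max_probes: int = 2) -> int:
    """Toy heuristic: choose how many follow-up probes (0-2) to ask.

    Uses answer length in words as a crude stand-in for 'thoroughness'.
    The real system would assess content quality, not just length.
    """
    words = len(answer.split())
    if words >= 80:   # thorough answer: no probe needed
        return 0
    if words >= 30:   # partial answer: one follow-up
        return 1
    return min(2, max_probes)  # thin answer: probe up to twice
```

In practice, capping probes at two per answer (as the study describes) keeps the interview from feeling repetitive — a frustration some participants reported.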

Here's what we found

1. The best part of human-moderated sessions is feeling heard and valued by a friendly (human) interviewer.

Most participants had a positive experience in their recent interview. Specifically, they enjoyed talking with friendly interviewers about topics of mutual interest. Participants appreciated interviewers who actively listen, ask engaging questions, and foster a comfortable, non-judgmental environment – in fact, comfort in communicating with the interviewer was a key theme for the majority of participants when asked what they enjoyed.

Q: What would make the participant feel most comfortable in their ideal research experience? (screenshot from Outset analysis)

  • “Basically, whenever I would express my thoughts to the researcher, they would ask more questions, maybe, why do you think that is, could you elaborate more on that, things like that, and it seemed like they were really open to hearing what I had to say.”


  • “Just a friendly interviewer, straightforward questions, that's pretty much the only thing that really gets me to share my feedback.”


  • “The researcher made me feel valued and respected by listening carefully to what I had to say, showing genuine interest in my thoughts, and not interrupting or judging me. Their friendly and non-judgmental attitude created a comfortable atmosphere where I felt free to share openly.”

Some shared situations where they felt uncomfortable in interview settings – for example, feeling vulnerable when discussing personal questions, or sensing that the researcher had a “predetermined agenda” and wasn’t genuinely listening. Some wondered about the goal of the research and why they were picked, because they weren’t sure their feedback was helpful.

  • “When the interviewer asked personal questions, I felt a bit vulnerable. Sometimes, I struggled to express myself clearly, which made me self-conscious. Also, pauses in the conversation made me unsure if I was doing something wrong. Despite these moments, the interviewer's supportive approach helped me feel more at ease.” 


  • “One thing that made me uncomfortable was when researchers seemed to have a predetermined agenda and were more focused on fitting my responses into their preconceived notions rather than genuinely listening to what I had to say. It made me question whether my input was truly valued or if they were just looking to confirm their existing hypotheses and ideologies.”

In other words, the humanness of the interviews comes through – whether it’s the human-to-human warmth and connection or unconscious biases and strong incoming hypotheses the interviewer brought to the table. 


2. For more personal or sensitive topics, we saw a split preference: some preferred AI and some preferred a human.

Some prefer talking to a human because they are looking for the emotional support, personal connection, and nuanced understanding that come from seeing and hearing another person.

Others, though, prefer interacting with AI for sensitive or personal topics because it offers greater anonymity and lacks judgment or perceived bias.

  • “I think sometimes talking about personal situations or embarrassing situations might be easier to talk to an AI interviewer rather than an actual human because an AI interviewer isn't going to judge you or make you feel badly about yourself or even react in a nonverbal way about something you said. So it might be even easier to say more personable things in an interview like this rather than a one-on-one interview with someone in person on Zoom.”


  • “It's not like looking a stranger in the eye and talking about secrets. It's like journaling alone”


3. Participants found the AI-moderated experience convenient and engaging (though at times it felt long)

Most participants reported a positive experience in the AI-conducted interview. This is consistent with a previous study with Outset participants. They found it engaging and liked the convenience of doing the interview at whatever time worked best for them. Many mentioned the accuracy of the AI in sharing back and building on their responses. Participants generally provided lengthy responses and a mix of positive and negative reactions, which also indicates a certain level of comfort responding honestly.

  • “This interview experience was fantastic because I felt that I could share my thoughts without worrying I would go over my time or under-provide. The interpretation and understanding of my responses is clear. Even with run-on thoughts, the key points were clearly captured. And it is convenient, easy and accessible.”


  • “So far I think that even though this is an AI interview, the responses by the interviewer, even though it's AI, have been reasonable and prompt and in line with the answers that I gave, so I felt good about that.”


  • “It's intriguing because you're catching on to what I'm saying. You're understanding the information I'm giving you and the details and you're able to respond back even though you're not an actual person so it's interesting and intriguing.”

Some shared frustrations that the questions felt repetitive while others wanted more guidance that they were responding in helpful ways and confirmation that their answers were recorded correctly. 


4. Participants don’t want AI to pretend to be human

When asked to choose between the current experience and one with added human attributes such as a photo, image, or avatar, 69% preferred the current experience. Participants prefer to maintain a clear distinction between humans and technology, expressing concern over blurring the lines between machine and human.

  • “I don't think making AI more human-like or giving it more human characteristics is going to help me open up and share more. I think I'd be answering the questions the same way I'm answering them now, and I don't feel like it would make it more personable or relatable…”


  • “What do I think about personifying the AI a bit, giving it a name, human traits, etc.? In this particular interview, research study, I think there isn't a whole lot of need for that, but I do think that there could be value for some of that to happen depending on the topic. Very often, however, that personification feels forced and doesn't really fit what you're trying to do. I think it would have to be flexible depending on the need.”

Q: Which option do you prefer for this type of interview?


Wrap up

AI interviewing platforms should not add human attributes. AI and human interviewers each have their own strengths, and each should lean into them.

AI can replicate much of what participants like about human researchers. Its tone is warm and friendly. It analyzes what they say and builds on their responses in ways similar to active listening. It remembers what was said earlier in the session and makes connections. It’s programmed by a human, too, so it’s never far from the researcher themselves. It’s also a machine, so it doesn’t have feelings to hurt, favorite ideas to push, or judgment to impart.

But it’s also not human, and adding human cues such as a name, an image, or a backstory adds noise to the study. Those elements may help build a connection to an AI assistant, coach, or companion, but in the context of research, any intervention should be carefully considered because it also introduces bias. That bias can be both participant- and topic-dependent, which makes it hard to isolate cleanly and to judge whether it’s worth the noise.

“I don't know. I think the idea of this is to not talk to a human and now you're [thinking of making] it more human-like, which for me, who would prefer this to talking to a human, why would you want to make it more human-like? I like it the way it is. I think if you made a cute little emoji like a cat or a flower and gave it a name like Cheeto the cat, I would talk to her about anything, but if you make it more like a human then it's gonna make it more like a human and I feel like that's what we're trying to avoid here. So no, I don't really like the idea [of adding human attributes].”

While this study explores the impact of human-like attributes on AI-moderated interviews, the learnings are likely relevant to the broader ecosystem of AI products being built today. We’re already seeing different approaches proliferate – OpenAI’s chat interface is simply called ChatGPT, while Quora’s chatbot is named Poe. Though there is likely no single answer to this question, there is a broader lesson for all AI products: lean into what makes the tool strong, but don’t default to ‘the more human the better.’ Users (or participants) are more sophisticated than you might think.