Using Large Language Models to perform theoretically informed semi-structured interviews
Abstract
Large Language Models (LLMs) show considerable promise for conducting interviews, typically implemented through a conversational chat interface. These “chatbot interviewers” can be used to (a) collect large-scale in-depth opinion data and (b) run prompt experiments with personalized conversational interventions. In this work, we use boterview, an open-source Python package designed for social scientists, to conduct chatbot interviews in an experimental setting. To investigate whether chatbot interviewers can develop theoretical sensitivity, we ran a pilot interviewing experiment with 26 university students and staff, compared against a group of 12 student interviewers, on the topic of a Universal Basic Income. Based on a content analysis of interviewer questions, we find that chatbot interviewers provided with theoretical background information are more likely to ask theoretically relevant questions and to pick up theoretically relevant cues from interviewees. On the other hand, compared with (in-person) human interviewers, responses to (written) chatbot interviewers are shorter and less informative, and chatbots adhere more closely to the interview script. We conclude by discussing avenues for improving the technical abilities and theoretical sensitivity of chatbot interviewers.