Automating Personality-Based Employment Interviews: Development and Validation of an Artificial Intelligence Chatbot

Abstract

This study examines the use of artificial intelligence (AI) chatbots and natural language processing (NLP) methods for administering and scoring personality-based employment interviews. We adapted a behavioral description interview to a chatbot interview format and evaluated the construct and criterion-related validity of machine-derived personality scores. Using archival data as a baseline, the study incorporated NLP methods including word embeddings extracted with transformers and zero-shot prompt-based scoring with a large language model (LLM). Three key findings emerged. First, chatbot interviews generated significantly lower interviewee word counts than human interviews, limiting the trait-relevant cues available to both human raters and machine-based methods. Second, construct validity results demonstrated moderate convergence between machine-derived and human rater scores, with LLM-based scores performing comparably to human ratings; however, limited discriminant validity suggests that method effects outweigh trait-specific variance. Third, machine-derived scores demonstrated incremental validity in predicting organizational citizenship behaviors (OCB) beyond self-reported personality scores, underscoring their potential utility in selection contexts. These findings emphasize the need for refinements in chatbot design to elicit richer responses and improve scoring accuracy, and they carry promising implications for scalable and efficient personality assessment in organizational settings.
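The zero-shot prompt-based scoring described above could be sketched roughly as follows. This is an illustrative assumption, not the authors' actual implementation: the prompt wording, the 1-5 rating scale, and the `build_zero_shot_prompt` and `parse_rating` helpers are hypothetical, and the call to an LLM API is omitted.

```python
import re
from typing import Optional

# Hypothetical Big Five trait labels a rater prompt might target.
BIG_FIVE = [
    "openness", "conscientiousness", "extraversion",
    "agreeableness", "neuroticism",
]

def build_zero_shot_prompt(transcript: str, trait: str) -> str:
    """Construct a zero-shot scoring prompt for one personality trait.

    The transcript would be the interviewee's chatbot-interview response;
    the returned string would be sent to an LLM without any fine-tuning
    or in-context examples (hence "zero-shot").
    """
    return (
        "You are an expert employment-interview rater. Read the interview "
        f"response below and rate the interviewee's {trait} on a 1-5 scale "
        "(1 = very low, 5 = very high). Reply with a single number.\n\n"
        f"Interview response:\n{transcript}\n\nRating:"
    )

def parse_rating(llm_reply: str) -> Optional[float]:
    """Extract the first numeric rating from a model reply, clamped to 1-5.

    Returns None when the reply contains no number at all.
    """
    match = re.search(r"\d+(?:\.\d+)?", llm_reply)
    if match is None:
        return None
    return min(5.0, max(1.0, float(match.group())))
```

In practice the prompt would be sent to an LLM for each trait and each interviewee, and the parsed ratings correlated with human rater scores and self-report scales to assess convergent and discriminant validity.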