The Construct Validity of Automated Written Interview Competency Assessments
Abstract
We report evidence on the construct validity of automated written interview competency assessments (AWI-CAs). Participants (N = 117) provided written responses to 12 open-ended interview questions designed to assess four behavioral competencies (Collaboration, Communication, Leadership, Critical Thinking), along with a neutral question. Responses were scored by a large language model (LLM) on an 11-point scale. Scores were substantially intercorrelated, including with the neutral question, suggesting considerable common method variance. Accordingly, we partialled out the neutral-question score from each behavioral question. Principal axis factor analysis with varimax rotation yielded a three-factor solution: Communication and Leadership loaded on a common factor, whereas Collaboration and Critical Thinking formed distinct factors. In addition, the Communication/Leadership factor was primarily associated with Open-Mindedness, the Collaboration factor with Conscientiousness, and the Critical Thinking factor with verbal reasoning. Overall, these findings support the construct validity of automated behavioral interviews beyond a general response tendency.
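The residualization-then-factoring procedure described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' analysis code: the score-generating process is invented for demonstration, and scikit-learn's maximum-likelihood `FactorAnalysis` (with varimax rotation) stands in for principal axis factoring.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 117  # sample size from the study

# Hypothetical synthetic stand-ins for LLM interview scores:
# a shared response tendency plus question-specific noise.
general = rng.normal(5.0, 1.5, n)                      # common method factor
neutral = general + rng.normal(0.0, 1.0, n)            # neutral-question score
behavioral = general[:, None] + rng.normal(0.0, 1.0, (n, 12))  # 12 questions

# Partial out the neutral-question score: regress each behavioral
# score on the neutral score and keep the residuals.
X = np.column_stack([np.ones(n), neutral])
beta, *_ = np.linalg.lstsq(X, behavioral, rcond=None)
residuals = behavioral - X @ beta

# Factor-analyze the residualized scores with varimax rotation.
fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(residuals)
loadings = fa.components_.T  # shape: (12 questions, 3 factors)
```

Because ordinary least squares residuals are orthogonal to the predictor, the residualized scores are exactly uncorrelated with the neutral-question score, isolating competency-specific variance before factoring.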