Automating the Assessment of Collaborative Engagement Using Natural Language Processing
Abstract
This study explores the use of natural language processing (NLP) to automate the evaluation of group engagement in collaborative learning settings. We evaluated two approaches, large language models (LLMs) and a set of interpretable linguistic markers, as predictors of the four dimensions (behavioral, social, cognitive, and conceptual-to-consequential engagement) of the quality of collaborative group engagement (QCGE) model. Analyzing conversation transcripts from three-person student groups engaged in a computer-supported collaborative design task produced four major findings. First, NLP successfully predicted about 10% of out-of-sample variance in collaborative engagement, suggesting that assessment can be automated. Second, interpretable linguistic markers explained more of this variance than ratings from an opaque LLM did. Third, the best linguistic markers were not specific to any single QCGE dimension, suggesting a common core of collaborative engagement. Fourth, the QCGE's manual rating scheme was limited by its lack of granularity and its insensitivity to natural variability in engagement. Overall, our analysis demonstrates both how NLP approaches can be leveraged to successfully automate the assessment of collaborative engagement and how these approaches can reveal key insights into its drivers.
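To make the out-of-sample prediction setup concrete, the following minimal sketch shows how interpretable linguistic markers could be used to predict a human-rated engagement dimension via cross-validated regression. This is not the authors' pipeline: the features, data, and model choice (ridge regression) are illustrative assumptions, and the synthetic data is tuned only to show how an R-squared near 0.10 would correspond to the "about 10% of variance" figure.

```python
# Illustrative sketch (hypothetical features and data, not the study's pipeline):
# predict one QCGE dimension from transcript-derived linguistic markers and
# estimate out-of-sample variance explained (R^2) via cross-validation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-group markers, e.g. word count, turn-taking balance,
# first-person-plural pronoun rate, question rate.
n_groups = 120
X = rng.normal(size=(n_groups, 4))

# Hypothetical human ratings of one engagement dimension (e.g., behavioral),
# weakly related to the markers plus rating noise.
y = X @ np.array([0.4, 0.2, 0.1, 0.0]) + rng.normal(scale=1.0, size=n_groups)

# Cross-validated R^2 approximates out-of-sample variance explained;
# a mean near 0.10 would mirror the abstract's reported figure.
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print(f"mean out-of-sample R^2: {scores.mean():.2f}")
```

Because ridge coefficients map directly onto the input markers, this style of model stays interpretable, which is the contrast the abstract draws against ratings produced by an opaque LLM.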