Designing Emotion Regulation Support in Online Group Learning: Insights from an LLM-Based Support Agent
Abstract
Online group learning (OGL) can be hampered by socio-emotional challenges stemming from the social and interactional barriers of online settings, which may elicit negative emotions among learners. Effective emotion regulation (ER) appears to be a crucial factor in supporting productive collaboration. Recent advances in artificial intelligence (AI), particularly large language models (LLMs), offer potential avenues for ER support in OGL; however, empirical guidance on the design and implementation of such tools remains limited. To begin addressing this gap, the present study examined the use of a default GPT-4 chatbot deployed in an OGL setting as an ER support agent. Chatbot outputs and user experience survey responses were analyzed using a mixed-methods approach combining deductive content analysis, qualitative thematic analysis, and descriptive quantitative measures. Results indicated that most chatbot outputs contained theory-aligned ER components, with socially shared and co-regulated learning strategies occurring more frequently than individual-level ER strategies. User experience data showed moderate usability and mixed perceptions of the chatbot's effectiveness, with qualitative feedback emphasizing delivery characteristics such as the timing and verbosity of its responses. Taken together, the findings suggest that while default LLM-based agents may offer a feasible foundation for ER support in OGL, careful interaction design and theory-aligned refinement are critical for enhancing acceptability and practical value.