Unpacking the Impact of Generative AI Feedback: Divergent Effects on Student Performance and Self-Regulated Learning
Abstract
Generative AI (Gen AI) feedback offers unprecedented opportunities to personalize and scale feedback in educational contexts. However, its effects on student learning may vary depending on the context and on how it interacts with students' self-regulated learning (SRL) processes. This study evaluates the impact of automated, personalized Gen AI feedback in a college-level object-oriented programming course, focusing on how it affects student performance after two types of errors: compiler errors and failed unit tests. Although Gen AI feedback improved performance following compiler errors, it had a likely negative effect when provided for unit test failures. Mediation analysis revealed that Gen AI feedback reduced the likelihood that students viewed detailed feedback on their errors, which in turn hurt their performance. This pattern indicates that automated Gen AI feedback can sometimes disrupt essential SRL behaviors, such as self-evaluation and strategic planning, leading to diminished code improvement. These findings suggest that the effectiveness of Gen AI feedback is context-dependent and highlight the need to design feedback systems that foster SRL while minimizing unintended negative consequences.