Adaptivity Makes Feedback Effective: Evidence From AI-Generated Feedback on Children’s Plans

Abstract

Large language models (LLMs) show great promise for providing effective feedback in educational contexts. A key strength lies in their ability to generate feedback that is tailored to individual learner responses. Positive effects of LLM-generated feedback are often attributed to adaptivity as an implicit system advantage, but this assumption is rarely tested directly. In this study, we examined whether the contingent adaptation of feedback to students' individual responses uniquely contributes to learning beyond well-designed generic guidance. To this end, we compared the effects of LLM-generated adaptive feedback with expert-generated but non-adaptive guidance across six planning tasks. Results from a sample of 155 children (mean age = 12.08 years) indicate that the quality of children's plans improved significantly more after LLM-generated feedback than after generic guidance. Furthermore, LLM-generated feedback was perceived as more helpful and more motivating, and these perceptions in turn explained plan quality improvements after feedback. Together, these findings provide direct evidence that contingent adaptation to learners' responses constitutes a key mechanism underlying feedback effectiveness. By isolating adaptivity as a core design feature, the study advances our theoretical understanding of feedback mechanisms and illustrates how LLMs can be leveraged to provide effective, pedagogically sound feedback.