AI-Calibrated Metacognition: How Genre-based ChatGPT Feedback and Interaction Shape L2 Writers' Metacognitive Judgments and Self-regulation
Abstract
Large language models (LLMs) increasingly mediate L2 writing feedback, yet we know little about how LLM output reshapes learners’ decision-making. This qualitative multiple-case study examines how genre-based ChatGPT feedback and dialogue shape novice L2 writers’ metacognitive judgments (MJs)—their bases and calibration—and subsequent self-regulated learning (SRL). In a first-year composition course, nine international students completed three genre-based assignments and engaged in structured AI feedback cycles using Genre Guru, a custom GPT grounded in genre theory. Data included reflections, ChatGPT logs, and five post-semester interviews. Framework analysis traced MJs across four genre-specific knowledge domains (formal, rhetorical, process, subject-matter) and mapped them to SRL phases (forethought, performance, self-reflection). Four themes emerged: (1) skepticism shifted to measured trust; (2) students critically evaluated AI suggestions, preserving text ownership; (3) writers integrated the four domains and articulated genre awareness; and (4) affect and motivation drove SRL cycles. Findings suggest that LLM-mediated feedback can cultivate AI-calibrated metacognition (AIM): iteratively using AI output and dialogue as fallible evidence to recalibrate self-judgments and to translate them into self-regulated control while retaining authorship.