ARPG+: Teaching Students to Ask Effective Questions for Educational LLM Use
Abstract
Despite widespread adoption of large language models (LLMs), most students cannot prompt them effectively. The core challenge is teaching students how to ask: transforming prompting from trial-and-error guessing into a systematic, transferable skill. Existing solutions, such as static templates, rule-based hints, and automated rewriting, either ignore individual learning needs or optimize outputs without building competence, leaving students dependent and unable to generalize. ARPG+ is a real-time coaching system, grounded in cognitive load theory and the zone of proximal development, that senses when learners struggle, delivers calibrated just-in-time interventions, and fades support as skills develop. The system tracks learner capability with uncertainty quantification, estimates cognitive overload from behavioral signals, diagnoses prompt quality across six dimensions, and adapts scaffolding intensity through a dynamic schedule with periodic skill probes. A dual lightweight-deep architecture keeps routine interactions responsive while reserving richer analysis for critical moments. Evaluation with simulated learners shows that ARPG+ produces substantial improvements: prompt quality increases by 143% relative to unguided practice, learners achieve independence in 91% of final interactions versus 59% under fixed support, and the approach generalizes to other domains without retraining. Our work establishes that principled real-time coaching can improve prompt quality, accelerate learning, prevent cognitive overload, and foster durable autonomy.
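To make the control loop described above concrete, the following is a minimal Python sketch of how capability tracking with uncertainty, overload estimation from behavioral signals, and fading scaffold intensity might fit together. It is an illustrative assumption, not the paper's implementation: the names (LearnerState, overload_score, scaffold_intensity), the choice of a Beta-distribution tracker, and all signal weights are hypothetical.

```python
import math
from dataclasses import dataclass


@dataclass
class LearnerState:
    """Beta-distribution capability tracker: the mean a/(a+b) rises with
    successes, and the posterior standard deviation quantifies uncertainty."""
    a: float = 1.0
    b: float = 1.0

    def capability(self) -> float:
        return self.a / (self.a + self.b)

    def uncertainty(self) -> float:
        n = self.a + self.b
        return math.sqrt(self.a * self.b / (n * n * (n + 1.0)))

    def observe(self, success: bool) -> None:
        # Update the estimate after each prompt attempt (or skill probe).
        if success:
            self.a += 1.0
        else:
            self.b += 1.0


def overload_score(retries: int, idle_seconds: float, deletions: int) -> float:
    """Crude overload proxy from behavioral signals: rapid retries, long
    pauses, and heavy deletion each push the score toward 1 (weights are
    illustrative placeholders)."""
    return (0.4 * min(retries / 3.0, 1.0)
            + 0.3 * min(idle_seconds / 60.0, 1.0)
            + 0.3 * min(deletions / 20.0, 1.0))


def scaffold_intensity(state: LearnerState, overload: float) -> float:
    """ZPD-style fading: support shrinks as capability grows, rises with
    estimate uncertainty, and spikes when the learner looks overloaded."""
    fading = 1.0 - state.capability()
    return min(1.0, 0.6 * fading + 0.2 * state.uncertainty() + 0.4 * overload)


if __name__ == "__main__":
    learner = LearnerState()
    for success in [False, False, True, True, True, True]:
        learner.observe(success)
    load = overload_score(retries=1, idle_seconds=10.0, deletions=2)
    print(f"capability={learner.capability():.2f}, "
          f"scaffold={scaffold_intensity(learner, load):.2f}")
```

Under this sketch, a routine interaction would consult only the cheap scalar above, while the intensity crossing a threshold could be what triggers the deeper analysis path of the dual architecture.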