A Pedagogical Framework and Its First Classroom Implementation in Response to Automation Bias, Cognitive Debt, and the Verification Paradox


Abstract

Generative AI (GenAI) has become cognitive infrastructure in higher education, yet it creates a verification paradox: student reliance peaks where task complexity is highest, objective accuracy is lowest, and perceived correctness remains inflated (a 46-point calibration gap). This paper presents the ACTIVE Framework, six verification principles operationalized as a five-step workflow (Assess, Constrain, Inspect, Verify, Explain), and its first classroom implementation at the Deggendorf Institute of Technology. Grounded in a three-wave longitudinal study (N = 21, 36, and 23) documenting automation bias, plausibility pressure, and metacognitive miscalibration, the framework was delivered through a two-module lecture design across engineering disciplines. Key components include explicit verification protocols, human-in-the-loop documentation, confidence calibration training, and a gradable assessment architecture that renders verification teachable. The implementation demonstrates feasibility and acceptability under real course conditions while identifying common student failure modes (source-monitoring errors, fluency substitution). Unlike generalized AI-literacy advocacy, ACTIVE offers a replicable instructional model with embedded assessment of the otherwise invisible verification process.