From Gatekeeper to Architect: Operationalizing AI as a Cognitive Partner in Higher Education
Abstract
In higher education, debates about Generative Artificial Intelligence (GenAI) often polarize around academic integrity risks versus efficiency gains. Missing from both accounts is a core instructional imperative: Large Language Models (LLMs) must be deliberately engineered to support higher-order cognitive development at scale rather than replace it. We present a simulator-based framework that operationalizes LLMs as constrained instructional instruments grounded in cognitive science and learning theory. The architecture decomposes the instructional process into three discrete phases (i.e., structured comprehension, schema-driven application and analysis, and synthesis under epistemic constraint), each instantiated via a dedicated, task-specific simulator. Central to this design is the instructor’s role as the sovereign epistemic regulator, defining knowledge granularity and uncertainty thresholds through precise parameterization and text selection. Rather than asserting a solution to Bloom’s 2 Sigma Problem, this framework demonstrates how LLMs can scale the specific instructional mechanisms associated with individualized tutoring (e.g., contingent feedback, error diagnosis, and iterative refinement). This approach provides a theoretically grounded, pragmatically constrained protocol for integrating GenAI into higher education, ensuring scalable cognitive development without compromising pedagogical sovereignty.