Governing Agentic AI in Global Education: A Justice-Oriented Framework for Alignment, Responsibility, and Distributed Autonomy
Abstract
The transition from responsive AI systems to action-capable AI agents raises urgent governance questions in education, where algorithmic systems increasingly shape pedagogical, evaluative, and institutional processes. This study conducts a qualitative systematic literature review of 71 peer-reviewed studies (2020–2024), applying reliability-validated thematic coding and a layered SWOT synthesis (Supplementary Tables A1–A4) to examine how AI-driven systems operate within sociotechnical learning environments. Addressing three research questions, the findings map (RQ1) the empirical positioning of educational AI along an Autonomy Gradient, from assistive tools to delegated decision-making systems; (RQ2) internal strengths and weaknesses, including personalization capacity, scalability, algorithmic bias, and data opacity; and (RQ3) external ecosystem conditions such as regulatory reform, platform concentration, and structural inequality. Integrating these results with a Digital Equity Evaluation Framework (DEEF), the study advances three propositions: (1) governance concerns scale with system autonomy; (2) alignment in educational AI is structurally mediated by digital inequality; and (3) responsibility becomes distributed as agentic systems embed within institutional infrastructures. By linking autonomy, equity, and alignment within a layered governance architecture, the study contributes to emerging scholarship on agentic AI ethics, offering a conceptual and methodological foundation for evaluating action-capable systems in public educational institutions.