From Black Box to Glass Door in Artificial Intelligence for Education

Abstract

Artificial intelligence (AI) is reshaping education through intelligent tutoring systems (ITS), technology-enhanced learning (TEL), and predictive analytics. These tools have the potential to personalize learning, improve access, and support progress towards the United Nations Sustainable Development Goal 4 (SDG 4: Quality Education). However, the rapid adoption of AI raises concerns about transparency, fairness, and accountability. When systems operate as “black boxes,” learners, teachers, and policymakers struggle to understand or challenge decisions that affect educational outcomes. This paper examines the importance of transparency and interpretability in educational AI, with a focus on explainable AI (XAI) as a means of making systems more open and trustworthy. It reviews current advances, identifies ethical risks, and highlights real-world cases where transparency improved adoption and outcomes. Building on these insights, the paper introduces Model-Context-Protocol (MCP) and human-in-the-loop (HITL) approaches as strategies to enhance explainability and safeguard accountability. Success stories from intelligent tutoring and decision-support systems illustrate how these practices strengthen trust and outcomes. The study argues that transparency and interpretability are essential for building sustainable, equitable, and trustworthy AI in education.
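
To make the abstract's pairing of explainability and human oversight concrete, here is a minimal, purely illustrative Python sketch of a "glass box" risk score for a student-support decision: every weight is inspectable, each prediction is accompanied by per-feature contributions, and uncertain cases are routed to a human reviewer. All feature names, weights, and thresholds are hypothetical assumptions for illustration, not values from the paper.

```python
# Illustrative sketch only: a toy "glass box" risk score with per-feature
# explanations and a human-in-the-loop (HITL) review gate.
# Feature names, weights, and thresholds are hypothetical, not from the paper.
import math

WEIGHTS = {                    # hand-set, inspectable weights (the "glass box")
    "attendance_rate": -2.0,   # higher attendance lowers risk
    "missed_assignments": 1.5, # more missed work raises risk
    "quiz_average": -1.2,      # higher quiz scores lower risk
}
BIAS = 0.3
REVIEW_BAND = (0.4, 0.6)       # uncertain scores are referred to a human reviewer


def risk_score(features: dict) -> tuple[float, dict]:
    """Return a dropout-risk probability and per-feature contributions."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions


def decide(features: dict) -> str:
    """Produce a decision plus a human-readable explanation of why."""
    p, contributions = risk_score(features)
    explanation = ", ".join(f"{k}: {v:+.2f}" for k, v in contributions.items())
    if REVIEW_BAND[0] <= p <= REVIEW_BAND[1]:
        return f"refer to human reviewer (p={p:.2f}; {explanation})"
    action = "offer support" if p > REVIEW_BAND[1] else "no action"
    return f"{action} (p={p:.2f}; {explanation})"


if __name__ == "__main__":
    student = {"attendance_rate": 0.7, "missed_assignments": 2, "quiz_average": 0.55}
    print(decide(student))
```

The point of the sketch is the pattern, not the model: any predictive component used in education can expose its reasoning alongside its output and defer low-confidence decisions to people, which is the combination of explainability and accountability the abstract argues for.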
