AI-Based Learning Platforms: A Systematic Review of Evaluation Metrics for Accessibility, Interactivity and Adaptability through the Lens of Universal Design for Learning

Abstract

Artificial Intelligence is reshaping the design, evaluation, and personalization of digital learning environments, enabling adaptive and data-driven pedagogies that respond to diverse learner needs. In parallel, the Universal Design for Learning (UDL) framework has become central to inclusive education, offering principles to ensure accessibility, engagement, and multiple means of representation. Despite this convergence, systematic analyses that evaluate AI-based learning platforms through the lens of UDL remain scarce. Existing reviews of AI in education have primarily focused on algorithmic efficiency, adaptive architectures, or technological innovation; they lack an analytical framework connecting AI-based learning technologies with UDL principles, largely because they do not articulate the dimensions needed to operationalize those principles into evaluative criteria. As a result, most AI-driven platforms are assessed in terms of technical performance rather than pedagogical inclusiveness, usability, accessibility compliance, or learner engagement. To address this gap, this study conducts a systematic review of the evaluation metrics applied to AI-based learning platforms, following the PRISMA methodology and analyzing peer-reviewed studies published from 2019 onward. Using UDL as a conceptual and analytical scaffold, the review structures its synthesis around three operational dimensions derived from the framework: accessibility (representation), interactivity (action and expression), and adaptability (engagement and motivation). Building on this analytical approach, the purpose of the review is twofold: first, to map the state of the art in evaluating AI-driven learning platforms through both normative and algorithmic metrics; and second, to propose an integrative model that links international standards with user-experience indicators and adaptive performance measures. In doing so, the study contributes a structured evaluative perspective that bridges technical methodologies and pedagogical frameworks for inclusion, advancing the development of more equitable, transparent, and accessible AI-based learning systems.
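
To make the three-dimensional structure concrete, the sketch below illustrates one way such an evaluation could be operationalized in code. It is not taken from the article: the metric names (e.g., wcag_conformance, adaptation_accuracy), the unweighted-mean aggregation, and all numeric values are hypothetical assumptions, chosen only to show how normative (standards-based) and algorithmic metrics could be grouped under the accessibility, interactivity, and adaptability dimensions described in the abstract.

```python
# Illustrative sketch (hypothetical, not from the article) of scoring an
# AI-based learning platform along the three UDL-derived dimensions.
from dataclasses import dataclass


@dataclass
class DimensionScore:
    """Metrics for one UDL-derived dimension, each normalized to [0, 1]."""
    name: str
    metrics: dict[str, float]  # metric name -> normalized score

    def aggregate(self) -> float:
        # Unweighted mean; a real rubric might weight metrics by importance.
        return sum(self.metrics.values()) / len(self.metrics)


def evaluate_platform(scores: list[DimensionScore]) -> dict[str, float]:
    """Return a per-dimension profile rather than a single opaque score,
    keeping normative and algorithmic metrics separately visible."""
    return {s.name: round(s.aggregate(), 2) for s in scores}


# Placeholder inputs mixing normative metrics (e.g., WCAG conformance)
# with algorithmic ones (e.g., adaptation accuracy). Values are invented.
profile = evaluate_platform([
    DimensionScore("accessibility", {"wcag_conformance": 0.9, "caption_coverage": 0.7}),
    DimensionScore("interactivity", {"response_modalities": 0.6, "feedback_latency": 0.8}),
    DimensionScore("adaptability", {"adaptation_accuracy": 0.75, "engagement_retention": 0.65}),
])
print(profile)  # {'accessibility': 0.8, 'interactivity': 0.7, 'adaptability': 0.7}
```

Reporting a per-dimension profile, rather than collapsing everything into one number, mirrors the abstract's emphasis on keeping standards compliance, user-experience indicators, and adaptive performance measures distinguishable in the evaluation.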
