A Systematic Review of Deep Knowledge Tracing (2015-2025): Toward Responsible AI for Education

Abstract

Background and Objectives: Tracking and adapting to learners’ evolving knowledge is essential for effective teaching. In digital learning environments, Deep Knowledge Tracing (DKT) employs deep neural networks to analyze sequential learner interactions, model evolving knowledge states, and predict skill mastery over time. Although DKT is widely studied, its real-world adoption remains limited. This review examines DKT research from 2015 to 2025 through the lens of responsible AI principles, investigating modeling trends, evaluation practices, input features used to represent learner performance and context, strategies for mitigating data quality issues, assessment of sequential stability (the consistency of knowledge estimates over time), and interpretability for educators.

Methods: Following PRISMA guidelines, five major scholarly databases (Web of Science, Scopus, ScienceDirect, ACM Digital Library, IEEE Xplore) and Google Scholar were searched, yielding 1,047 peer-reviewed articles. After two rounds of screening and a quality appraisal focused on methodological rigor, 84 studies were included in the final synthesis.

Results: Graph-based architectures were the most common (26.2%), followed by hybrid/meta models (23.8%) and attentive models (17.9%). ASSIST datasets were used in 82.1% of studies, and 90.5% relied primarily on the Area Under the Curve (AUC) for evaluation. Input features varied widely, ranging from basic question–answer pairs and knowledge concepts to time-based metrics, difficulty levels, behavioral indicators, and learning-resource interactions. Approaches addressing data quality challenges appeared in 44.0% of studies. Only 3.6% quantitatively assessed the sequential stability of predictions, and interpretability techniques, designed to make predictions understandable to educators, were present in 11.9% of studies.

Conclusions: Current DKT models often overlook responsible AI principles, including robust handling of data quality issues, assessment of the sequential stability of predictions, and interpretability of predictions. As AI regulatory frameworks increasingly mandate trustworthy and interpretable AI in education, future research should prioritize these principles to enable practical and responsible deployment.
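To make the abstract's core concepts concrete, the sketch below shows a minimal DKT-style model of the kind the reviewed literature builds on: an LSTM over one-hot (skill, correctness) interactions that predicts per-skill mastery, evaluated with AUC, followed by a simple step-to-step probe of sequential stability. This is an illustrative sketch under assumed choices (the sizes NUM_SKILLS and HIDDEN, the synthetic data, and the stability metric are hypothetical), not the method of any reviewed study.

```python
# Illustrative sketch only: a minimal DKT-style model in PyTorch. The layer
# sizes, synthetic data, and stability metric are assumed for demonstration.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

NUM_SKILLS = 10   # assumed number of knowledge concepts
HIDDEN = 32
SEQ_LEN = 50
BATCH = 16

class MinimalDKT(nn.Module):
    def __init__(self, num_skills: int, hidden: int):
        super().__init__()
        # Input: one-hot over 2 * num_skills (skill id x correct/incorrect).
        self.lstm = nn.LSTM(2 * num_skills, hidden, batch_first=True)
        # Output: one mastery logit per skill at every time step.
        self.out = nn.Linear(hidden, num_skills)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h))  # (batch, time, num_skills)

def encode(skills: torch.Tensor, correct: torch.Tensor, num_skills: int) -> torch.Tensor:
    """One-hot encode each (skill, correctness) interaction."""
    idx = skills + correct * num_skills  # shift index for correct answers
    return nn.functional.one_hot(idx, 2 * num_skills).float()

# Synthetic interaction sequences stand in for real learner logs.
skills = torch.randint(0, NUM_SKILLS, (BATCH, SEQ_LEN))
correct = torch.randint(0, 2, (BATCH, SEQ_LEN))
x = encode(skills, correct, NUM_SKILLS)

model = MinimalDKT(NUM_SKILLS, HIDDEN)
with torch.no_grad():
    mastery = model(x)  # predicted mastery per skill over time

# AUC evaluation: the prediction at step t for the skill attempted at t+1.
next_skill = skills[:, 1:]
pred = mastery[:, :-1, :].gather(2, next_skill.unsqueeze(-1)).squeeze(-1)
auc = roc_auc_score(correct[:, 1:].flatten().numpy(), pred.flatten().numpy())

# A simple sequential-stability probe: mean absolute change in mastery
# estimates between consecutive steps (lower = smoother trajectories).
stability = (mastery[:, 1:, :] - mastery[:, :-1, :]).abs().mean().item()
print(f"AUC={auc:.3f}  mean step-to-step mastery change={stability:.3f}")
```

The final two lines illustrate the kind of quantitative sequential-stability check the review found in only 3.6% of studies: rather than scoring next-answer prediction alone, it measures how erratically the model's knowledge estimates fluctuate from one interaction to the next.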
