Interpretable Deep Knowledge Tracing and Visualization of Learner Progress with Attention-Based Models
Abstract
Modern intelligent tutoring systems increasingly depend on data-driven student models to monitor individual learning progress. While Deep Knowledge Tracing (DKT) models, based on recurrent or attention-based neural networks, have demonstrated superior predictive performance over traditional approaches, their latent representations often lack interpretability, limiting their utility for educators. This study proposes iSAKT + Behave (interpretable Self-Attentive Knowledge Tracing with Behavioural Features), an interpretable, attention-based knowledge tracing framework that extends both DKT and Self-Attentive Knowledge Tracing (SAKT) by integrating rich behavioral features (e.g., hint usage, attempt counts) and leveraging attention weights for visualization. Evaluated on the public ASSISTments Skill Builder benchmark, our enhanced models consistently outperformed their DKT and SAKT baselines: Enhanced DKT achieved an AUC of 0.8681 (vs. 0.5297 for the baseline), and Enhanced SAKT reached 0.8998 (vs. 0.5086). Attention heatmaps highlight which past interactions most influence current predictions, offering intuitive, skill-level insights into student learning. Including behavioral indicators associated with disengagement, such as bottom-out hints and repeated attempts, significantly improved both performance and interpretability. These findings align with recent efforts to integrate cognitive models with DKT architectures, supporting the case for explainable artificial intelligence (AI) in education. This work contributes a novel fusion of deep learning, educational feature engineering, and visualization techniques to advance interpretable student modeling.
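To make the described fusion concrete, the sketch below shows one plausible way a SAKT-style attention layer can be augmented with behavioral features while exposing the attention weights used for heatmap visualization. It is a minimal illustration under stated assumptions, not the paper's released implementation: the class name BehavioralSAKT, the two-feature behavioral vector (hint count, attempt count), and all dimensions are hypothetical choices.

```python
import torch
import torch.nn as nn


class BehavioralSAKT(nn.Module):
    """Minimal SAKT-style knowledge tracer with behavioral features.

    Hypothetical sketch: the class name, the (hint_count, attempt_count)
    feature pair, and all sizes are illustrative assumptions, not the
    paper's exact iSAKT + Behave architecture.
    """

    def __init__(self, n_skills, d_model=64, n_heads=4, max_len=200):
        super().__init__()
        self.n_skills = n_skills
        # Past interaction id = skill + correctness * n_skills (standard SAKT encoding)
        self.interaction_emb = nn.Embedding(2 * n_skills, d_model)
        self.exercise_emb = nn.Embedding(n_skills, d_model)  # query side
        self.pos_emb = nn.Embedding(max_len, d_model)
        # Project raw behavioral counts (hints, attempts) into the model space
        self.behavior_proj = nn.Linear(2, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        self.out = nn.Linear(d_model, 1)

    def forward(self, past_skills, past_correct, past_behavior, cur_skills):
        # past_skills, past_correct, cur_skills: (batch, seq) long tensors,
        # already shifted so key position t holds the interaction *before*
        # query position t; past_behavior: (batch, seq, 2) behavioral counts.
        T = past_skills.size(1)
        pos = torch.arange(T, device=past_skills.device).unsqueeze(0)
        # Key/value side fuses the response history with behavioral signals
        kv = (self.interaction_emb(past_skills + past_correct * self.n_skills)
              + self.behavior_proj(past_behavior.float())
              + self.pos_emb(pos))
        q = self.exercise_emb(cur_skills)
        # Causal mask (True = blocked) keeps each query from seeing the future
        causal = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=q.device), diagonal=1
        )
        ctx, attn_w = self.attn(q, kv, kv, attn_mask=causal,
                                need_weights=True, average_attn_weights=True)
        h = ctx + self.ffn(ctx)  # residual feed-forward block
        p_correct = torch.sigmoid(self.out(h)).squeeze(-1)  # (batch, seq)
        return p_correct, attn_w  # attn_w: (batch, seq, seq), for heatmaps


if __name__ == "__main__":
    # Toy run on random data; attn_w[0] can be rendered as a heatmap
    # (e.g., matplotlib imshow) to see which past steps drive each prediction.
    model = BehavioralSAKT(n_skills=100)
    B, T = 4, 50
    skills = torch.randint(0, 100, (B, T))
    correct = torch.randint(0, 2, (B, T))
    behavior = torch.rand(B, T, 2)
    p, attn_w = model(skills, correct, behavior, skills)
    print(p.shape, attn_w.shape)  # torch.Size([4, 50]) torch.Size([4, 50, 50])
```

Returning the attention weights alongside the predictions is what keeps the heatmap interpretation faithful: the same weights that gate each prediction are plotted directly, rather than a post-hoc approximation of the model's computation.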