Causal Attention Graph Knowledge Tracing

Abstract

Graph-based Knowledge Tracing (GKT) is a prominent variant of Graph Neural Networks (GNNs) applied to knowledge tracing. By modeling coursework as a structured graph, it recasts knowledge tracing as a time-series node-level classification problem. However, GNNs struggle to distinguish causal from non-causal relationships, because most GNNs follow the 'learning to attend' paradigm, which treats the relationship between features and prediction targets broadly and non-selectively. This leads to poor generalization of GNN classifiers. To enhance the generalization and robustness of model classification, we propose a knowledge tracing model called Causal Attention Graph-based Knowledge Tracing (CAGKT). The method constructs a structural causal model to capture causal relationships between variables, eliminates confounding factors by cutting off backdoor paths, and uses causal features for the final prediction. A Temporal Convolutional Network is first used to extract comprehensive concept features for each student at each time step. An attention module then aggregates the features of neighboring nodes to update the graph features. Finally, the graph features are decoupled into causal and shortcut features, and students' responses are predicted from the causal features alone. Comparison and ablation experiments on public datasets show that the proposed method improves student performance prediction and outperforms existing models.
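The abstract describes a pipeline of temporal feature extraction, attention-based neighbor aggregation, and causal/shortcut decoupling. Below is a minimal sketch of such a pipeline, not the authors' implementation: the module choices (a 1-D causal convolution standing in for the TCN, multi-head attention over concept nodes, a learned soft mask for decoupling), dimensions, and class names are all illustrative assumptions.

```python
# Illustrative sketch of a CAGKT-style forward pass (assumed structure, not the paper's code).
import torch
import torch.nn as nn

class CAGKTSketch(nn.Module):
    def __init__(self, num_concepts, feat_dim):
        super().__init__()
        # TCN stand-in: causal 1-D convolution over the time dimension
        self.tcn = nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=2)
        # Attention over neighboring concept nodes at each time step
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        # Soft mask that splits node features into causal and shortcut parts (assumption)
        self.mask = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.predict = nn.Linear(feat_dim, 1)

    def forward(self, x):
        # x: (batch, time, num_concepts, feat_dim) interaction features
        b, t, n, d = x.shape
        # 1) temporal concept features (causal conv, trimmed back to length t)
        h = x.permute(0, 2, 3, 1).reshape(b * n, d, t)
        h = self.tcn(h)[..., :t].reshape(b, n, d, t).permute(0, 3, 1, 2)
        # 2) attention-based aggregation over concept nodes
        h = h.reshape(b * t, n, d)
        h, _ = self.attn(h, h, h)
        # 3) decouple into causal and shortcut features via a soft mask
        m = self.mask(h)
        causal, shortcut = m * h, (1 - m) * h
        # 4) predict response correctness from the causal features only
        logits = self.predict(causal).reshape(b, t, n)
        return torch.sigmoid(logits), shortcut

# Toy usage with random data
model = CAGKTSketch(num_concepts=5, feat_dim=16)
x = torch.randn(2, 8, 5, 16)
probs, _ = model(x)
print(probs.shape)  # torch.Size([2, 8, 5])
```

The key design point mirrored here is that only the causal branch feeds the predictor, so gradients push discriminative information into the causal features while the shortcut branch absorbs confounding signal.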
