Integrating Attention Mapping and Legal Prior Knowledge for Interpretable Legal Reasoning with Transformer-Based Models

Abstract

This study addresses the problems of insufficient interpretability and opaque reasoning logic in legal artificial intelligence by proposing an explainable legal reasoning model that integrates attention mapping with legal prior knowledge. The model is built on a Transformer encoder architecture: the self-attention mechanism enables deep semantic modeling of legal texts, while a legal prior matrix constraint is incorporated during reasoning to align the attention distribution with the logical structure and citation relationships of legal provisions. The model establishes a complete reasoning chain from text input and semantic encoding to prior fusion and interpretable projection, so that predictions are both accurate and logically traceable through attention visualization. Using the Case Classification subset of the LexGLUE dataset, the study conducts systematic validation and compares the proposed model with multiple baselines on accuracy, precision, recall, and F1-score. Experimental results show that the model achieves higher stability and consistency in legal text classification tasks and demonstrates a marked improvement in attention-based interpretability. Furthermore, multidimensional sensitivity experiments covering hyperparameters, data, and environmental factors confirm the model's robustness and reasoning soundness under varying conditions. Overall, the findings indicate that embedding legal prior knowledge into deep language model structures effectively strengthens logical consistency and interpretive reasoning in legal contexts, providing a solid technical and theoretical foundation for intelligent legal analysis.
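
To make the prior-fusion idea concrete, the following is a minimal PyTorch sketch of a self-attention layer whose score matrix is blended with a legal prior matrix before softmax normalization. It is an illustration only: the class name PriorConstrainedSelfAttention, the additive blending controlled by prior_weight, and the toy citation-link prior are assumptions for exposition, not the paper's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PriorConstrainedSelfAttention(nn.Module):
        """Single-head self-attention whose raw scores are blended with a
        legal prior matrix (e.g. provision citation links) before softmax."""

        def __init__(self, d_model: int, prior_weight: float = 0.3):
            super().__init__()
            self.query = nn.Linear(d_model, d_model)
            self.key = nn.Linear(d_model, d_model)
            self.value = nn.Linear(d_model, d_model)
            self.prior_weight = prior_weight      # strength of the prior constraint (assumed)
            self.scale = d_model ** 0.5

        def forward(self, x: torch.Tensor, prior: torch.Tensor):
            # x:     (batch, seq_len, d_model) token embeddings of the legal text
            # prior: (batch, seq_len, seq_len) pairwise prior, e.g. 1.0 where two
            #        tokens belong to provisions linked by a citation relationship
            q, k, v = self.query(x), self.key(x), self.value(x)
            scores = q @ k.transpose(-2, -1) / self.scale              # raw attention logits
            scores = (1 - self.prior_weight) * scores + self.prior_weight * prior
            attn = F.softmax(scores, dim=-1)                           # interpretable attention map
            return attn @ v, attn

    # Toy usage: 2 sequences of 6 tokens, 32-dim embeddings, and a synthetic
    # prior marking one token pair as linked by a cited provision.
    if __name__ == "__main__":
        x = torch.randn(2, 6, 32)
        prior = torch.zeros(2, 6, 6)
        prior[:, 0, 3] = prior[:, 3, 0] = 1.0
        layer = PriorConstrainedSelfAttention(d_model=32)
        output, attn_map = layer(x, prior)
        print(output.shape, attn_map.shape)   # (2, 6, 32) and (2, 6, 6)

The returned attn_map is the quantity that an interpretability analysis of the kind described in the abstract would visualize and compare against the structure of the cited provisions.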
