Integrating Knowledge Graph Reasoning with Pretrained Language Models for Structured Anomaly Detection
Abstract
This paper proposes an anomaly detection algorithm based on large language models guided by a knowledge graph. The method combines the deep semantic understanding of large language models with the structural modeling strengths of knowledge graphs, forming a fraud detection framework jointly driven by semantics and structure. Specifically, the model first encodes transaction texts with a pretrained language model to generate semantic representations. In parallel, a financial knowledge graph is constructed from entities such as transaction accounts, devices, and IP addresses, and structural embeddings are obtained from it via graph neural networks. A gated fusion mechanism then adaptively integrates the semantic and structural vectors, and the fused representation is fed into a classifier to produce fraud predictions. In multiple comparative experiments, the proposed model outperforms single-modality approaches on accuracy, precision, recall, and F1-score, validating its effectiveness in complex financial scenarios. In addition, the study conducts multi-task adaptation experiments and tests under varying anomaly ratios to evaluate the model's stability and robustness. The results show that the proposed method maintains strong performance across different environments, demonstrating its reliability and practical value.
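To make the fusion step concrete, the sketch below shows one plausible reading of the gated fusion described in the abstract: the semantic vector from the pretrained language model and the structural vector from the graph neural network are projected into a shared space, a sigmoid gate weights the two modalities element-wise, and the fused vector is passed to a classifier head. The module name, dimension sizes, and the exact gating form are assumptions for illustration; the abstract only states that a gate adaptively integrates the two representations.

```python
import torch
import torch.nn as nn


class GatedFusionClassifier(nn.Module):
    """Illustrative gated fusion of semantic (PLM) and structural (GNN) embeddings.

    This is a minimal sketch under assumed dimensions, not the paper's exact
    architecture: the gate form and classifier head are hypothetical choices.
    """

    def __init__(self, sem_dim: int = 768, struct_dim: int = 128,
                 hidden_dim: int = 256, num_classes: int = 2):
        super().__init__()
        # Project both modalities into a shared hidden space before gating.
        self.sem_proj = nn.Linear(sem_dim, hidden_dim)
        self.struct_proj = nn.Linear(struct_dim, hidden_dim)
        # Gate computed from the concatenated projections, one weight per dimension.
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.Sigmoid(),
        )
        # Final head producing fraud / non-fraud logits.
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, h_sem: torch.Tensor, h_struct: torch.Tensor) -> torch.Tensor:
        s = self.sem_proj(h_sem)        # semantic representation from the language model
        t = self.struct_proj(h_struct)  # structural embedding from the graph neural network
        g = self.gate(torch.cat([s, t], dim=-1))
        fused = g * s + (1.0 - g) * t   # element-wise gated interpolation of the two views
        return self.classifier(fused)   # logits for the fraud prediction


if __name__ == "__main__":
    model = GatedFusionClassifier()
    h_sem = torch.randn(4, 768)      # e.g. [CLS] embeddings of transaction texts
    h_struct = torch.randn(4, 128)   # e.g. node embeddings of the corresponding accounts
    print(model(h_sem, h_struct).shape)  # torch.Size([4, 2])
```

The interpolation form `g * s + (1 - g) * t` lets the model lean on semantic evidence when transaction text is informative and fall back on graph structure when it is not, which is one common way to realize the adaptive integration the abstract describes.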