Application and Effectiveness Evaluation of Federated Learning Methods in Anti-Money Laundering Collaborative Modeling Across Inter-Institutional Transaction Networks

Abstract

We propose a Graph Foundation Model (GFM) that performs self-supervised contrastive pre-training on heterogeneous account-merchant-geo-device graphs locally within each institution. Cross-institutional knowledge transfer with privacy protection is achieved through federated learning combined with secure aggregation and DP-SGD (ε ≤ 3.0). On 95 million transactions across five institutions, GFM, deployed as a frozen backbone with a lightweight adapter, achieved 23–31% higher PR-AUC and 9–13 percentage points higher recall (at a fixed precision of ≥ 0.92) than GNNs trained independently at each institution. For open-set detection, energy-score screening identified novel typologies at 18–24% higher rates. Communication overhead was kept to ≤ 40 MB per round, and total training time was reduced by 27%. Grouped SHAP values and subgraph attention provided auditable explanations. These results show that federated self-supervised pre-training can substantially improve AML generalization without sharing raw data.
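The privacy mechanism pairs per-institution update protection with server-side secure aggregation. The sketch below illustrates that pattern in Python; the clipping bound and noise multiplier are assumptions for illustration only, since the abstract states just the overall privacy budget (ε ≤ 3.0), and the secure-aggregation protocol itself is abstracted to a plain average.

```python
import numpy as np

# Hypothetical hyperparameters for illustration; the abstract reports only
# the privacy budget (epsilon <= 3.0), not these values.
CLIP_NORM = 1.0  # per-client update clipping bound C (assumed)
SIGMA = 1.1      # Gaussian noise multiplier (assumed)

def protect_update(update: np.ndarray) -> np.ndarray:
    """Clip a client's model update to L2 norm C and add Gaussian noise,
    the DP-SGD-style step each institution applies before sending anything."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, CLIP_NORM / max(norm, 1e-12))
    noise = np.random.normal(0.0, SIGMA * CLIP_NORM, size=update.shape)
    return clipped + noise

def aggregate(client_updates: list[np.ndarray]) -> np.ndarray:
    """Server-side averaging. In deployment this sum would run under secure
    aggregation, so no individual institution's update is ever revealed."""
    return np.mean([protect_update(u) for u in client_updates], axis=0)
```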
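The abstract's open-set results rely on energy scores; a minimal sketch, assuming the standard free-energy formulation E(x) = −T · logsumexp(f(x)/T) over the classifier logits, with a threshold τ (assumed here to be calibrated on known typologies):

```python
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Free energy E(x) = -T * logsumexp(f(x)/T). High-energy samples fall
    outside the distribution of known typologies."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

def flag_novel_typologies(logits: torch.Tensor, tau: float) -> torch.Tensor:
    """Flag transactions whose energy exceeds tau as candidate novel
    laundering patterns (tau is a hypothetical calibrated threshold)."""
    return energy_score(logits) > tau
```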
