Network-Enhanced Neuro-Symbolic Framework for Financial Fraud Detection: Integrating Corporate Network Topology with Interpretable AI


Abstract

Corporate fraud imposes substantial costs on investors and the broader economy, yet existing detection methods face a fundamental trade-off between accuracy and interpretability. Traditional rule-based approaches provide transparency but achieve limited performance (F1-scores of 0.55-0.65), while machine learning models demonstrate superior accuracy (87-94%) but lack the interpretability regulators require. We address this challenge by proposing a network-enhanced neuro-symbolic framework that integrates corporate network topology with interpretable AI. Using 15,724 restatement cases from 9,951 U.S. companies (2000-2024), we construct a board interlock network from 561,306 director records and extract network features that capture fraud contagion patterns. Our framework combines LSTM neural networks for pattern recognition, domain-specific symbolic rules for transparency, and an attention mechanism for optimal integration. Results demonstrate that network features contribute 31.2% of predictive power, with fraud neighbor ratio emerging as the strongest predictor. The integrated framework achieves superior performance (F1=0.891) compared to pure neural (F1=0.723) or symbolic (F1=0.651) approaches, while maintaining interpretability through SHAP analysis and symbolic rule activation. Temporal validation across 15 time periods confirms model robustness. In deployment, the framework processes each company in under 2.3 seconds and reduces audit time by 38%. These findings establish network topology as a fundamental fraud detection signal and demonstrate that neuro-symbolic integration can resolve the accuracy-interpretability trade-off in regulated environments.
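To illustrate the network feature the abstract highlights, the sketch below builds a toy board interlock graph (companies linked when they share a director) and computes a fraud neighbor ratio: the fraction of a company's interlock neighbors previously flagged for restatements. The data, company names, and function names are illustrative assumptions, not the paper's actual schema or implementation.

```python
from collections import defaultdict

# Hypothetical toy director records: (director, company) pairs.
# The paper uses 561,306 real records; this schema is an assumption.
board_records = [
    ("dir_1", "AcmeCorp"), ("dir_1", "BetaInc"),
    ("dir_2", "BetaInc"), ("dir_2", "GammaLLC"),
    ("dir_3", "AcmeCorp"), ("dir_3", "GammaLLC"),
    ("dir_4", "GammaLLC"), ("dir_4", "DeltaCo"),
]
# Toy labels: companies with prior fraud-related restatements.
fraud_flags = {"BetaInc"}

def board_interlock_graph(records):
    """Companies are nodes; an edge links two companies sharing a director."""
    by_director = defaultdict(set)
    for director, company in records:
        by_director[director].add(company)
    neighbors = defaultdict(set)
    for companies in by_director.values():
        for a in companies:
            for b in companies:
                if a != b:
                    neighbors[a].add(b)
    return neighbors

def fraud_neighbor_ratio(graph, company, flags):
    """Fraction of a company's interlock neighbors flagged for fraud."""
    nbrs = graph.get(company, set())
    if not nbrs:
        return 0.0
    return len(nbrs & flags) / len(nbrs)

graph = board_interlock_graph(board_records)
# AcmeCorp's neighbors are BetaInc and GammaLLC; one is flagged.
print(fraud_neighbor_ratio(graph, "AcmeCorp", fraud_flags))  # 0.5
```

In the full framework this ratio would be one input feature among others (financial ratios fed to the LSTM, symbolic rule activations), with the attention mechanism weighting the neural and symbolic components.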