Methodological Challenges in Explainable AI for Fraud Detection: A Systematic Literature Review

Abstract

As complex, black-box Artificial Intelligence (AI) models become integral to high-stakes domains such as fraud detection, the need for transparency is critical. Explainable AI (XAI) aims to make algorithmic decisions understandable to stakeholders. While systematic reviews have mapped the use of XAI across the broader financial sector, a focused synthesis of the methodological challenges unique to fraud detection is still lacking. This systematic literature review addresses that gap, synthesizing findings from 49 peer-reviewed articles to identify current practices and foundational challenges. Our analysis reveals that while post-hoc methods such as SHAP and LIME are prevalent, their application is undermined by two systemic methodological flaws that threaten the validity of current research. First, we identify an explainability-imbalance paradox, in which common data resampling techniques used to manage class imbalance inadvertently compromise the fidelity of post-hoc explanations. Second, we uncover a profound evaluation vacuum: over 80% of the analyzed studies use model predictive performance as a proxy for explanation quality rather than evaluating the explanations themselves directly. Based on these findings, we propose a research agenda to guide the field toward more robust evaluation standards and the development of explanation-aware data processing methods.
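
The explainability-imbalance paradox can be probed empirically. The following is a minimal sketch, not drawn from any of the reviewed studies: the synthetic data, RandomForest classifier, SMOTE resampler, and rank-correlation comparison are all illustrative assumptions. It trains the same model with and without resampling and compares the SHAP feature rankings each version produces on the same untouched test set.

```python
"""Illustrative probe of the explainability-imbalance paradox (assumptions:
scikit-learn, imbalanced-learn, shap, and scipy are installed; data, model,
and comparison metric are illustrative choices, not the review's protocol)."""
import numpy as np
import shap
from imblearn.over_sampling import SMOTE
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic, heavily imbalanced "fraud" data (~2% positive class).
X, y = make_classification(n_samples=20_000, n_features=15, n_informative=6,
                           weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Model A: trained on the original imbalanced training set.
# Model B: trained on a SMOTE-resampled training set.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
model_a = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
model_b = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)


def mean_abs_shap(model, X_eval):
    """Mean |SHAP| per feature for the positive (fraud) class."""
    sv = shap.TreeExplainer(model).shap_values(X_eval)
    if isinstance(sv, list):      # older shap: list of per-class arrays
        sv = sv[1]
    elif sv.ndim == 3:            # newer shap: (n_samples, n_features, n_classes)
        sv = sv[:, :, 1]
    return np.abs(sv).mean(axis=0)


# Explain both models on the SAME held-out, non-resampled test data.
imp_a = mean_abs_shap(model_a, X_te[:500])   # subsample to keep it fast
imp_b = mean_abs_shap(model_b, X_te[:500])

rho, _ = spearmanr(imp_a, imp_b)
print(f"Spearman correlation of SHAP feature rankings: {rho:.3f}")
# A low correlation indicates that resampling alone shifted which features
# the explanations emphasize, even though both models are evaluated on the
# same real-world test distribution.
```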
