Interpretable Ensemble Learning Models for Credit Card Fraud Detection

Abstract

Alongside the growing advantages and conveniences of digital transactions, the financial sector also loses billions of dollars each year to fraud. While credit cards have made life easier and more convenient, they have also introduced significant security threats. Detecting fraudulent transactions in financial sectors such as banking is a major challenge because existing fraud detection methods are rule-based and unable to detect unknown patterns. The tactics and techniques used by fraudsters evolve faster than these rule-based systems can, making machine learning (ML) a valuable approach to improving detection efficiency. While numerous studies have explored machine learning models for credit card fraud detection, most have prioritized accuracy metrics alone, paying little attention to how or why models make decisions. This lack of interpretability creates barriers for financial institutions, where regulatory compliance and user trust are critical. In particular, the systematic application of explainable AI (XAI) techniques such as SHAP and LIME to fraud detection remains scarce. This study addresses this gap by combining high-performing ensemble models (Random Forest and XGBoost) with advanced interpretability methods (SHAP and LIME), providing both strong predictive performance and transparent feature-level explanations. Such integration not only improves fraud detection but also strengthens the trustworthiness and deployability of AI systems in real-world financial contexts. A real-world credit card dataset is used to evaluate both models, and experimental results show that Random Forest achieved higher precision (89.09%) and F1 score (0.9159), while XGBoost yielded better recall (95.56%) and ROC AUC (0.9997). To address the crucial need for interpretability, SHAP and LIME analyses were applied, revealing the most influential features behind model predictions and enhancing transparency in decision-making. Overall, this study demonstrates the potential of integrating XAI into fraud detection systems, thereby enhancing trust and reliability in financial institutions.
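To make the pipeline described above concrete, the following is a minimal sketch of how the two ensemble models could be trained, scored on the reported metrics, and explained with SHAP and LIME. It is not the authors' exact implementation: the dataset here is a synthetic, imbalanced stand-in for a real credit card dataset, and the feature names, hyperparameters, and evaluation split are illustrative assumptions.

```python
# Illustrative sketch only: synthetic data and hyperparameters are assumptions,
# not the study's configuration (which uses a real credit card dataset).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score
import xgboost as xgb
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for transaction data: fraud is the rare positive class.
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# The two ensemble models compared in the study.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
xgb_clf = xgb.XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_train, y_train)

# Report the same metrics the abstract cites: precision, recall, F1, ROC AUC.
for name, model in [("Random Forest", rf), ("XGBoost", xgb_clf)]:
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print(name,
          f"precision={precision_score(y_test, pred):.4f}",
          f"recall={recall_score(y_test, pred):.4f}",
          f"F1={f1_score(y_test, pred):.4f}",
          f"ROC AUC={roc_auc_score(y_test, proba):.4f}")

# SHAP: global, feature-level attributions for a tree ensemble.
explainer = shap.TreeExplainer(xgb_clf)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, show=False)

# LIME: local explanation for a single suspicious transaction.
feature_names = [f"V{i}" for i in range(X.shape[1])]  # hypothetical names
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["legit", "fraud"], mode="classification")
idx = int(np.argmax(rf.predict_proba(X_test)[:, 1]))  # most fraud-like case
explanation = lime_explainer.explain_instance(
    X_test[idx], rf.predict_proba, num_features=5)
print(explanation.as_list())
```

The SHAP summary plot gives a global view of which features drive predictions across the whole test set, while the LIME output explains one individual prediction, the kind of per-transaction justification regulators and analysts typically need.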
