Interpretable Ensemble Learning Models for Credit Card Fraud Detection

Abstract

With the growing advantages and conveniences provided by digital transactions, the financial sector also loses billions of dollars each year. While credit cards have made life easier and more convenient, they have also become a significant source of risk. Detecting fraudulent transactions in financial sectors such as banking is a major challenge because existing fraud detection methods are rule-based and unable to detect unknown patterns, whereas the tactics and techniques used by fraudsters are far more advanced, making machine learning (ML) a valuable approach to improve detection efficiency. The present study implements and compares the performance of two widely used ML algorithms, Random Forest (RF) and Extreme Gradient Boosting (XGBoost), for identifying credit card fraud. To improve model transparency and interpretability, two explainable AI techniques, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), are applied. A real-world credit card dataset is used to evaluate both models on standard key metrics such as accuracy, precision, recall, and interpretability. This study supports the integration of explainable artificial intelligence (XAI) into fraud detection systems to improve trust, reliability, and real-world applicability in the financial sector. These findings will help practitioners choose the model best suited to a given real-world scenario.
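For readers who want a concrete sense of the workflow the abstract describes, the following is a minimal, hypothetical sketch (not the authors' code): it trains RF and XGBoost on a credit card fraud dataset, reports precision and recall, and applies SHAP and LIME to explain predictions. The file name `creditcard.csv` and the `Class` label column are assumptions matching the layout of a commonly used public credit card fraud dataset.

```python
# Hypothetical sketch of the described pipeline; dataset path and column names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

df = pd.read_csv("creditcard.csv")                 # assumed dataset file
X, y = df.drop(columns=["Class"]), df["Class"]     # assumed label column
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Train both ensemble models and report precision/recall on the held-out set
models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=42),
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss", random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te), digits=4))

# SHAP: feature attributions for the XGBoost model
shap_explainer = shap.TreeExplainer(models["XGBoost"])
shap_values = shap_explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te, show=False)

# LIME: local explanation for a single transaction using the Random Forest
lime_explainer = LimeTabularExplainer(
    X_tr.values,
    feature_names=X.columns.tolist(),
    class_names=["legit", "fraud"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_te.iloc[0].values, models["RandomForest"].predict_proba, num_features=10
)
print(explanation.as_list())
```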
