Enhancing Scalability and Transparency in AI-Driven Credit Scoring: Optimizing Explainability for Large-Scale Financial Systems

Abstract

The growing adoption of artificial intelligence (AI) in credit scoring has significantly enhanced predictive accuracy, but it has also raised concerns regarding transparency, fairness, and trust. The "black box" nature of many machine learning models used in financial decision-making can hinder understanding and accountability, particularly in high-stakes scenarios such as loan approvals. Addressing these challenges requires methods that improve both the explainability and the scalability of AI-driven credit scoring systems. This study examines how the performance of explainability techniques degrades with increasing data volume in tree-based ensemble models such as XGBoost, and investigates strategies to optimize performance, including feature selection and model refinement. By applying these approaches to a dataset of 2.3 million loan applications from Lending Club, the research aims to provide insights into improving the efficiency and transparency of large-scale AI systems. The findings will contribute to more transparent, fair, and efficient credit scoring models, ensuring that AI-driven decisions are both interpretable and compliant with regulatory standards.
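The scaling behavior described above can be illustrated with a minimal sketch. The example below uses scikit-learn's gradient boosting and permutation importance as generic stand-ins for the paper's XGBoost model and its explainability technique (which the abstract does not name), on synthetic data rather than the Lending Club dataset; all sizes and feature counts are illustrative assumptions. It times explanation cost as the number of rows explained grows, then applies feature selection to shrink the model, one of the optimization strategies the study considers.

```python
import time

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a loan-application dataset (sizes are illustrative).
X, y = make_classification(
    n_samples=3000, n_features=20, n_informative=8, random_state=0
)
model = GradientBoostingClassifier(n_estimators=30, random_state=0).fit(X, y)

# Explanation cost grows with the number of rows being explained:
# time a generic explainability step at increasing data volumes.
for n_rows in (500, 1500, 3000):
    t0 = time.perf_counter()
    result = permutation_importance(
        model, X[:n_rows], y[:n_rows], n_repeats=3, random_state=0
    )
    print(f"{n_rows} rows explained in {time.perf_counter() - t0:.2f}s")

# Feature selection as a mitigation: keep the 8 most important features
# and refit, so both prediction and explanation operate on fewer inputs.
top_features = np.argsort(result.importances_mean)[-8:]
reduced_model = GradientBoostingClassifier(
    n_estimators=30, random_state=0
).fit(X[:, top_features], y)
```

In practice, a tree-specific explainer (e.g. TreeSHAP) would typically replace permutation importance for XGBoost, but the pattern of measuring explanation time against data volume and pruning the feature set is the same.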
