Systematic Review of Explainable AI Techniques in Software Engineering and Decision-Making
Abstract
Explainable artificial intelligence (XAI) has become a vital subject of study, particularly in domains where accountability, transparency, and trust are critical. The growing use of machine learning and deep learning models in software engineering and decision-making has raised serious concerns about their "black-box" nature: poor interpretability creates risks for automated decision support, software quality assurance, debugging, and regulatory compliance. This study systematically reviews XAI methodologies applied in software engineering and decision-making between 2010 and 2025, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) framework. Searches of Scopus, Web of Science, IEEE Xplore, ACM Digital Library, PubMed, and Google Scholar identified 705 records, of which 98 were included after rigorous screening. The findings indicate that the most widely adopted techniques are SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), attention-based architectures, and rule-based methods. Reported applications include decision support, risk assessment, optimization, performance evaluation, automated testing, and bug prediction. Despite substantial progress, standardizing evaluation metrics, ensuring scalability, and integrating XAI into DevOps pipelines remain open challenges. The paper constructs a conceptual taxonomy of XAI in software engineering and decision-making, synthesizes knowledge that has so far been fragmented across the two fields, and sets out a research agenda for human-centered, interpretable, and scalable AI solutions. In doing so, it addresses a significant research gap: it is the first PRISMA-based synthesis to systematically connect XAI methods with both software engineering and decision-making, two domains that have previously been studied in isolation.
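To make the surveyed techniques concrete, the following is a minimal sketch of applying SHAP to a defect-prediction model, one of the application areas identified above. It uses the open-source `shap` library with a scikit-learn classifier; the synthetic dataset, the feature semantics (lines of code, complexity, churn), and the model choice are illustrative assumptions, not drawn from any of the reviewed studies.

```python
# Minimal sketch: SHAP explanations for a hypothetical defect-prediction model.
# The synthetic data and feature meanings below are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Rows are software modules; columns stand in for code metrics such as
# lines of code, cyclomatic complexity, and recent churn (all synthetic).
X = rng.random((300, 3))
y = (0.6 * X[:, 1] + 0.4 * X[:, 2] > 0.55).astype(int)  # synthetic defect labels

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles,
# attributing each module's predicted defect risk to individual metrics.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Depending on the shap version, binary classifiers yield either a list of
# per-class arrays or one 3-D array; select the positive class either way.
positive = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
print("Per-metric attribution for the first module:", positive[0])
```

Per-feature attributions of this kind underpin the decision-support and bug-prediction uses identified in the review: a developer can see which metrics drive a given prediction rather than having to trust an opaque risk score.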