Transparency and Explainability Focus: Making AI Decisions Interpretable to Humans
Abstract
This report presents a comprehensive review of the emerging field of Human-Centered Explainable Artificial Intelligence (XAI), concentrating on the crucial task of ensuring that AI decisions are understandable and credible to human users. As AI spreads across sensitive domains such as healthcare, finance, and online retail, the need for clear and understandable explanations grows. The review considers different formats of explanation, including visual aids (e.g., saliency maps) and textual summaries, and examines the challenges faced in evaluating them. A key finding is that research interest has shifted markedly since 2021 from purely technical approaches toward human perception, interaction, and trust. We synthesize the results of 73 empirical studies published through 2024 and show that local post-hoc explanation, particularly feature importance methods such as LIME and SHAP, is the focus of much of the current literature, while inherently interpretable models receive comparatively little attention. Despite the large pool of explanation techniques, there is a dearth of standardized metrics for evaluating interpretability, user confidence, and impact on decision making. This gap limits the comparability of evidence across studies and hampers efforts to design effective, user-friendly AI explanations. The paper calls for structured frameworks and harmonized protocols for evaluating explainability that specify how explanations contribute to user trust, understanding, and decision support. Ultimately, a rigorous, human-centered approach to evaluating AI systems is needed to make them not only transparent but genuinely understandable to their users. The goal of this work is to drive further research into trustworthy, human-centered modes of explanation that bridge the chasm between algorithmic complexity and human understanding.
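To illustrate the kind of local post-hoc explanation the review finds dominant, the sketch below computes per-feature SHAP attributions for a single prediction of an opaque model. This is a minimal example under stated assumptions, not code from any of the reviewed studies; the dataset, model, and background-sampling choices are illustrative.

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an opaque model on a standard tabular dataset (illustrative choice).
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Model-agnostic explainer over the positive-class probability, with a
# sample of the training data serving as the background distribution.
explainer = shap.Explainer(lambda x: model.predict_proba(x)[:, 1],
                           shap.sample(X_train, 100))

# Explain a single test instance: the attribution values estimate each
# feature's contribution to this one prediction (a "local" explanation).
explanation = explainer(X_test[:1])
for name, value in zip(data.feature_names, explanation.values[0]):
    print(f"{name}: {value:+.3f}")

The printed attributions are the raw material of such explanations; how they are presented to users, and whether they actually improve trust and decision making, is precisely the evaluation question the review raises.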