Explainable Artificial Intelligence (XAI): Investigating Methods to Make AI Algorithms More Interpretable and Transparent

Abstract

The rapid advancement of artificial intelligence (AI) technologies has heralded transformative changes across various domains, from healthcare to finance. However, the increasing complexity of AI systems, particularly deep learning models, often results in opaque decision-making processes that are challenging for humans to interpret and trust. Explainable Artificial Intelligence (XAI) emerges as a critical field aimed at enhancing the interpretability and transparency of AI models. This review explores state-of-the-art methods in XAI, categorizing them into post-hoc interpretability techniques and inherently interpretable models. We examine methods such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), which provide post-hoc insights into already-trained models. Additionally, we discuss inherently interpretable approaches, such as decision trees and rule-based learners, which are designed to be understandable from the outset. The review also addresses key challenges and future directions in XAI, emphasizing the need to balance model accuracy against interpretability. Furthermore, we explore case studies that demonstrate the applicability of XAI techniques in real-world scenarios, underscoring their potential to support ethical and responsible AI deployment. The overall goal is to provide a comprehensive understanding of XAI methodologies, their current limitations, and the opportunities they present for building trustworthy AI systems.
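
To make the two categories discussed above concrete, the sketch below applies SHAP and LIME as post-hoc explainers to a black-box random forest and contrasts them with a shallow decision tree whose learned rules can be printed directly. It is a minimal illustration assuming the open-source scikit-learn, shap, and lime Python packages; the dataset, model choices, and parameters are illustrative and not drawn from the reviewed article.

```python
# Minimal sketch: post-hoc explanations (SHAP, LIME) for a black-box model,
# contrasted with an inherently interpretable shallow decision tree.
# Assumes the open-source scikit-learn, shap, and lime packages; the dataset,
# models, and parameters are illustrative, not taken from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
import shap
from lime.lime_tabular import LimeTabularExplainer

# A small tabular binary-classification dataset.
data = load_breast_cancer()
X, y = data.data, data.target
feature_names = list(data.feature_names)

# --- Post-hoc explanation of a black-box model ---
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: Shapley-value attributions, computed efficiently for tree ensembles.
shap_explainer = shap.TreeExplainer(black_box)
shap_values = shap_explainer.shap_values(X[:5])  # per-feature attributions for 5 samples

# LIME: fit a local surrogate model around a single prediction.
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
lime_explanation = lime_explainer.explain_instance(
    X[0], black_box.predict_proba, num_features=5
)
print(lime_explanation.as_list())  # top local feature contributions

# --- Inherently interpretable model: a shallow decision tree ---
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(glass_box, feature_names=feature_names))  # human-readable rules
```

The SHAP and LIME outputs attribute each individual prediction to input features after the fact, whereas the decision tree's printed rules are themselves the model, illustrating the trade-off between the two families of methods the review examines.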
