Enhancing AI Transparency for Human Understanding: A Comprehensive Review

Abstract

Transparency between AI models and humans is one of the most hotly debated topics in technology. As artificial intelligence (AI) continues to permeate various sectors, the demand for transparency in AI decision-making has become increasingly critical. This paper presents a comprehensive review of Explainable Artificial Intelligence (XAI), examining 57 key studies that focus on various explanation approaches and their impact on end-user trust and accountability. Recognizing the obstacles posed by the black-box nature of AI models, this work emphasizes the need for suitable explanation methods that enable people and AI models to work together. The findings highlight the importance of XAI in enhancing trust, particularly in high-stakes domains such as healthcare and finance, and propose directions for future research toward reliable and interpretable AI solutions.
