A Review on Empirical Studies in Explainable Artificial Intelligence
Abstract
As artificial intelligence (AI) systems become more integrated into decision-making processes, explainability has emerged as a prerequisite for trust, understanding, and effective human-AI collaboration. Given the wide variety of explainable AI (XAI) methods available, selecting the right one for a specific user group and use case remains challenging, especially since existing theoretical guidance has seen limited empirical validation. This paper presents the first systematic literature review aimed at identifying the influence of specific explanations on user outcomes across diverse settings. To accomplish this goal, the review focuses on the human-grounded evaluation of XAI methods. Moving beyond high-level taxonomies, we classify XAI methods along multiple dimensions, such as scope and output type. Based on this classification, we analyze how the properties of XAI methods affect different user groups, tasks, and domains. Our findings underscore the necessity of context-aware method selection, as the effectiveness of XAI methods varies significantly across use cases. Moreover, our analysis reveals imbalances in the existing empirical landscape, where certain methods and user groups are overrepresented while others are largely overlooked. By validating theoretical proposals with empirical evidence, this review provides actionable guidance for selecting XAI methods that align with user needs and use case demands, paving the way for more targeted and effective human-AI interaction.