XAI-based Data Visualization in Multimodal Medical Data

Abstract

Explainable Artificial Intelligence (XAI) is crucial in healthcare because it makes intricate machine learning models understandable and transparent, especially when working with diverse medical data, thereby enhancing trust, improving diagnostic accuracy, and facilitating better patient outcomes. This paper thoroughly examines the most advanced XAI techniques used on multimodal medical datasets. These strategies include perturbation-based methods, concept-based explanations, and example-based explanations. The value of perturbation-based approaches such as LIME and SHAP for explaining model predictions in medical diagnostics is explored. The paper discusses using concept-based explanations to connect machine learning results with concepts humans can understand, improving the interpretability of models that handle different types of data, including electronic health records (EHRs), behavioural, omics, sensor, and imaging data. Example-based strategies, such as prototypes and counterfactual explanations, are emphasised for offering intuitive and accessible explanations of healthcare judgments. The paper also explores the difficulties encountered in this field, including managing high-dimensional data, balancing the tradeoff between accuracy and interpretability, and dealing with limited data by generating synthetic data. Recommendations for future studies focus on improving the practicality and dependability of XAI in clinical settings.
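The perturbation-based explanations mentioned above (the family that includes LIME and SHAP) share a core idea: perturb individual inputs and measure how the model's prediction changes. A minimal sketch of that idea follows, using a toy clinical risk model; the model, feature names, and weights are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a perturbation-based explanation, the idea
# underlying LIME/SHAP-style attribution. The risk model and its
# features are hypothetical, chosen only for illustration.

def toy_risk_model(features):
    # Hypothetical linear risk score over three clinical features.
    weights = {"age": 0.02, "bp_systolic": 0.01, "glucose": 0.005}
    return sum(weights[name] * value for name, value in features.items())

def perturbation_importance(model, features, baseline=0.0):
    """Score each feature by how much the prediction changes when
    that feature is replaced by a neutral baseline value."""
    base_pred = model(features)
    importance = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = baseline
        importance[name] = base_pred - model(perturbed)
    return importance

patient = {"age": 70, "bp_systolic": 140, "glucose": 110}
scores = perturbation_importance(toy_risk_model, patient)
# Larger scores indicate features that contribute more to this
# patient's predicted risk.
```

LIME and SHAP refine this basic recipe: LIME fits a local surrogate model over many random perturbations, and SHAP averages feature contributions over all orderings to obtain Shapley values, but both rest on the same perturb-and-compare principle sketched here.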
