Understanding the Landscape: A Review of Explainable AI in Healthcare Decision-Making

Abstract

Breast cancer remains a significant global health concern, impacting millions of women. Early and accurate diagnosis is crucial for improving treatment outcomes and reducing mortality rates. Machine learning (ML) has emerged as a powerful tool for breast cancer prediction, demonstrating its ability to identify complex patterns and relationships in large datasets and paving the way for effective collaboration between AI systems and healthcare professionals. This systematic review explores the diverse machine learning techniques employed in breast cancer diagnosis. We comprehensively analyze and evaluate the effectiveness of various computational methodologies by synthesizing findings from a wide range of peer-reviewed studies. Our analysis highlights the substantial advances achieved in applying machine learning algorithms to breast cancer prediction. However, challenges remain in harnessing the full potential of machine learning for healthcare, including the need for larger and more diverse datasets, the effective incorporation of imaging data, and the development of interpretable models. While AI offers immense potential for improving healthcare, ensuring transparency, interpretability, and trust is crucial, especially in complex domains such as cancer diagnosis. This review emphasizes the importance of Explainable AI (XAI) for enhancing clinical decision-making and building trust between patients and healthcare providers. We advocate for fostering interdisciplinary collaboration among AI researchers, medical professionals, ethicists, and policymakers to ensure the responsible integration of AI in healthcare.