Comparative Analysis of Explainable AI Methods Across Machine Learning Classifiers for Breast Cancer Detection
Abstract
Breast cancer remains one of the leading causes of death among women worldwide, making early and precise detection of the disease critically important. Machine learning models have proven highly predictive in medical diagnosis, but their lack of interpretability can be a barrier to adoption in clinical practice. Explainable Artificial Intelligence (XAI) techniques, including SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), have emerged as among the most effective approaches to this problem. This work presents a comparative study of SHAP and LIME applied to two popular classifiers, Logistic Regression and Random Forest, on the Wisconsin Breast Cancer dataset. In addition to measuring model performance, the paper introduces a quantitative agreement metric to evaluate how reliably SHAP and LIME identify the same important features. The experimental findings show that the two methods reach a high degree of consensus (approximately 77 percent), yielding consistent, high-quality explanations, with slight deviations observed for the more complex model. These results indicate that SHAP and LIME are complementary and can improve the transparency of medical AI systems.
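The abstract does not specify how the reported agreement score is computed. One plausible formulation, sketched below as an assumption rather than the paper's actual metric, is the overlap between each method's top-k feature rankings; the feature names are illustrative examples drawn from the Wisconsin Breast Cancer dataset, and the rankings themselves are hypothetical.

```python
# Hypothetical sketch of a top-k feature-agreement metric between SHAP and
# LIME rankings. The exact metric used in the paper is not stated; this
# illustrates one common choice (overlap of the two top-k feature sets).

def top_k_agreement(shap_ranking, lime_ranking, k=5):
    """Fraction of the k top-ranked features shared by both methods."""
    shap_top = set(shap_ranking[:k])
    lime_top = set(lime_ranking[:k])
    return len(shap_top & lime_top) / k

# Illustrative (made-up) rankings using Wisconsin Breast Cancer feature names.
shap_ranking = ["worst concave points", "worst perimeter", "mean concave points",
                "worst radius", "worst area", "mean texture"]
lime_ranking = ["worst perimeter", "worst concave points", "worst area",
                "mean radius", "worst radius", "worst smoothness"]

print(top_k_agreement(shap_ranking, lime_ranking, k=5))  # 0.8
```

In practice the rankings would come from averaging absolute SHAP values and LIME weights per feature across the test set; the agreement score could then be reported per model, as the paper does for Logistic Regression and Random Forest.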