Explainable AI in Healthcare: Interpreting Machine Learning's Dementia Classification on Imbalanced Multi-Domain Clinical Data
Abstract
Dementia comprises a group of diseases and conditions that damage the human brain and impair cognitive functioning. In recent years, significant effort has gone into developing robust computer-aided diagnostic tools, particularly AI models for dementia detection, yet their clinical adoption remains limited. One identified issue is that, despite achieving high accuracy, AI-powered predictions have relied on single features or small feature sets drawn from a narrow range of categories, which undermines their robustness and generalizability. Moreover, accuracy alone has not been enough to persuade medical practitioners to adopt these tools, as they require transparent and scientifically grounded diagnostic processes. This study aims to improve the transparency and reliability of AI models for predicting dementia through three key steps: 1) comparing a Machine Learning (ML) model's accuracy in detecting dementia when trained on a multi-domain dataset versus subsets of its features, 2) evaluating additional ML models for dementia detection on the multi-domain dataset, and 3) applying Explainable AI (XAI) to elucidate how specific features drive ML-based predictions, thereby enhancing transparency and interpretability. We hypothesize that multi-domain data will improve ML model performance and that XAI will show that the features most influential in the models' decisions align with established indicators of dementia. This study contributes to the field by constructing lightweight predictive models that achieve high accuracy while enhancing transparency and interpretability, advancing toward clinically applicable AI-powered diagnosis.
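To make the workflow concrete, the sketch below illustrates the general pipeline the abstract describes: training a classifier on imbalanced multi-domain tabular data and then ranking feature influence with a model-agnostic XAI technique. The abstract does not specify the dataset, model, or XAI method used in the study, so everything here is an illustrative assumption: the data is synthetic, the feature names are placeholders, and permutation importance stands in for whichever XAI method the authors applied.

```python
# Minimal sketch, not the study's actual pipeline: train a classifier on
# imbalanced tabular data and rank feature influence with a model-agnostic
# XAI technique (permutation importance). Dataset, model choice, class
# ratio, and feature names are all illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a multi-domain clinical dataset; the ~15% positive
# class mimics the label imbalance typical of dementia cohorts (hypothetical).
X, y = make_classification(
    n_samples=2000, n_features=12, n_informative=6,
    weights=[0.85, 0.15], random_state=0,
)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # placeholder names

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=0,
)

# class_weight="balanced" counteracts the skewed label distribution.
model = RandomForestClassifier(class_weight="balanced", random_state=0)
model.fit(X_train, y_train)

# Balanced accuracy is more informative than plain accuracy under imbalance.
print("balanced accuracy:", balanced_accuracy_score(y_test, model.predict(X_test)))

# Permutation importance: shuffle each feature and measure the score drop,
# revealing which inputs the trained model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.4f}")
```

In this kind of analysis, the ranked importances are what would be checked against established clinical indicators of dementia: if the highest-ranked features correspond to known risk markers, that alignment supports the transparency and trustworthiness argument the abstract makes.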