Explainable artificial intelligence for neuroimaging-based dementia diagnosis and prognosis


Abstract

INTRODUCTION: Artificial intelligence and neuroimaging enable accurate dementia prediction, but ‘black box’ models can be difficult to trust. Explainable artificial intelligence (XAI) describes techniques for understanding model behaviour and the influence of features; however, deciding which method is most appropriate is non-trivial. Vision transformers (ViT) have also gained popularity, providing a self-explainable alternative to traditional convolutional neural networks (CNN). METHODS: We used T1-weighted MRI to train models on two tasks: Alzheimer’s disease (AD) classification (diagnosis) and predicting conversion from mild cognitive impairment (MCI) to AD (prognosis). We compared ten XAI methods across CNN and ViT architectures. RESULTS: Models achieved balanced accuracies of 81% for diagnosis and 67% for prognosis. XAI outputs highlighted brain regions relevant to AD and contained useful information for MCI prognosis. DISCUSSION: XAI can be used to verify that models are utilising relevant features and to generate valuable measures for further analysis.
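
To make the workflow concrete, the sketch below shows one way an attribution-based XAI method can be applied to a 3D CNN trained on T1-weighted MRI, using PyTorch and Captum's Integrated Gradients. This is a minimal illustration, not the authors' code: the toy architecture, the input volume size, and the choice of Integrated Gradients (one of many attribution methods of the kind compared in the article) are assumptions introduced here for demonstration.

    # Minimal sketch (illustrative, not the authors' pipeline): voxel-level
    # attribution maps for a 3D CNN dementia classifier via Captum.
    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    class Tiny3DCNN(nn.Module):
        """Toy stand-in for a T1-weighted MRI classifier (AD vs. control)."""
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(16, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = Tiny3DCNN().eval()

    # One fake T1-weighted volume: (batch, channel, depth, height, width).
    # A real pipeline would load a preprocessed, skull-stripped scan here.
    scan = torch.randn(1, 1, 32, 32, 32, requires_grad=True)

    # Attribute the "AD" logit (class index 1, an assumed label ordering)
    # back to the input voxels.
    ig = IntegratedGradients(model)
    attributions = ig.attribute(scan, target=1, n_steps=32)

    # Positive values mark voxels pushing the prediction towards AD;
    # overlaying |attributions| on the scan gives a saliency-style heatmap
    # that can be checked against regions known to be relevant to AD.
    print(attributions.shape)  # torch.Size([1, 1, 32, 32, 32])

Swapping IntegratedGradients for another Captum class (e.g. Saliency or Occlusion) changes only the attribution step, which is what makes a like-for-like comparison of multiple XAI methods on the same trained model straightforward.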
