The utility of explainable A.I. for MRI analysis: Relating model predictions to neuroimaging features of the aging brain
Abstract
Introduction
Deep learning models predict brain age from MRI with high accuracy, but their explanatory capacity is limited. Explainable A.I. (XAI) methods can identify the voxels contributing to model estimates, yet they do not reveal which biological features these voxels represent. In this study, we closed this gap by relating voxel-based contributions to brain-age estimates, extracted with XAI, to human-interpretable structural features of the aging brain.
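As a minimal sketch of the general idea (the abstract does not name the specific XAI method used; the model and input names below are hypothetical), a voxel-wise relevance map for a 3D-CNN brain-age model can be approximated with simple input gradients:

```python
# Minimal sketch, not the authors' pipeline: gradient-based voxel relevance
# for a 3D-CNN that predicts age from a single preprocessed MRI volume.
import torch


def voxel_relevance(model: torch.nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """Return a voxel-wise relevance map for one T1/FLAIR volume.

    volume: tensor of shape (1, 1, D, H, W), already preprocessed.
    """
    model.eval()
    volume = volume.clone().requires_grad_(True)
    predicted_age = model(volume).squeeze()  # scalar brain-age estimate
    predicted_age.backward()                 # d(age) / d(voxel intensity)
    # Absolute gradient as a simple proxy for how much each voxel contributes
    return volume.grad.abs().squeeze(0).squeeze(0)
```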
Methods
To this end, we associated participant-level XAI-based relevance maps, extracted from two ensembles of 3D convolutional neural networks (3D-CNNs) trained on T1-weighted and fluid-attenuated inversion recovery images of 2016 participants (age range 18–82 years), respectively, with regional cortical and subcortical gray matter volume and thickness, perivascular spaces (PVS), and water diffusion-based fractional anisotropy of main white matter tracts.
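The association step can be illustrated as follows. This is a sketch under the assumption that each participant has a voxel-wise relevance map and an atlas label volume in the same space; the exact atlas, regions, and statistics used in the study are not specified in this abstract.

```python
# Minimal sketch: correlate per-participant regional relevance with a
# structural feature (e.g., regional gray matter volume or cortical thickness).
import numpy as np
from scipy.stats import pearsonr


def regional_mean_relevance(relevance: np.ndarray, atlas: np.ndarray, label: int) -> float:
    """Average relevance over all voxels belonging to one atlas region."""
    return float(relevance[atlas == label].mean())


def relevance_feature_correlation(relevance_maps, atlases, label, feature_values):
    """Pearson correlation across participants between regional relevance
    and a regional neuroimaging feature."""
    regional = [regional_mean_relevance(r, a, label)
                for r, a in zip(relevance_maps, atlases)]
    return pearsonr(regional, feature_values)
```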
Results
We found that all neuroimaging markers of brain aging, except PVS, were highly correlated with the XAI-based relevance maps. Overall, the strongest correlation was observed between ventricular volume and relevance (r = 0.69); among individual features, temporo-parietal cortical thickness and volume, cerebellar gray matter volume, and fronto-occipital white matter tracts showed the strongest correlations with XAI-based relevance.
Conclusion
Our ensembles of 3D-CNNs drew on a broad range of known brain-aging processes to predict age. Some age-associated features, such as PVS, were not consistently considered by the models, and the cerebellum was more important than expected. Taken together, our findings highlight the ability of end-to-end deep learning models, combined with XAI, to reveal biologically relevant, multi-feature relationships in the brain.