An Automated Multimodal Medical Image Fusion Framework for Alzheimer Detection using Deep Learning

Abstract

Alzheimer's disease (AD) is a progressive neurodegenerative condition that primarily affects the elderly. Early detection and diagnosis of AD are critical for effective treatment and can greatly improve patient outcomes. Imaging techniques such as MRI, PET, and SPECT provide valuable information about the structural and functional changes associated with the disease. However, each imaging modality offers only a partial perspective, so combining information across modalities can improve the accuracy and reliability of AD detection. By fusing information from different imaging modalities, such as MRI, PET, DTI, and fMRI, automated multimodal medical image fusion frameworks aim to create a fused representation that preserves the relevant features of each modality. Deep learning techniques, notably convolutional neural networks (CNNs) and generative adversarial networks (GANs), are widely used in these frameworks to learn discriminative and informative features from multimodal data. In this paper, the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset is used for experimental analysis. The proposed framework achieves 98.94% accuracy with a 1.06% error rate, outperforming existing approaches.
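The abstract does not specify the fusion architecture, so the following is only a minimal sketch of one common pattern it alludes to: feature-level fusion, where a small CNN branch encodes each modality and the resulting embeddings are concatenated before classification. All names, layer sizes, and the two-modality (MRI + PET) setup here are illustrative assumptions, not the paper's actual model.

import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    # Small CNN branch that maps one imaging modality (e.g., an MRI
    # or PET slice) to a fixed-size feature vector.
    def __init__(self, in_channels=1, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 32, 1, 1)
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)  # (B, 32)
        return self.fc(h)            # (B, feat_dim)

class FusionClassifier(nn.Module):
    # Feature-level fusion: encode each modality separately, concatenate
    # the embeddings, then classify (e.g., AD vs. cognitively normal).
    def __init__(self, num_modalities=2, feat_dim=64, num_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            ModalityEncoder(feat_dim=feat_dim) for _ in range(num_modalities)
        )
        self.head = nn.Sequential(
            nn.Linear(num_modalities * feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, inputs):
        # inputs: list of tensors, one per modality, each (B, 1, H, W)
        feats = [enc(x) for enc, x in zip(self.encoders, inputs)]
        return self.head(torch.cat(feats, dim=1))

if __name__ == "__main__":
    model = FusionClassifier(num_modalities=2)
    mri = torch.randn(4, 1, 64, 64)  # batch of 4 single-channel MRI slices
    pet = torch.randn(4, 1, 64, 64)  # batch of 4 co-registered PET slices
    logits = model([mri, pet])
    print(logits.shape)  # torch.Size([4, 2])

In a GAN-based variant, as the abstract also mentions, a generator would typically synthesize the fused image itself rather than concatenating feature vectors; the concatenation approach above is simply the most compact setup to illustrate.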
