Classifying brain activations during movie-watching using 3D Convolutional Neural Networks
Abstract
Neuroimaging research provides a wealth of information about the localization of brain activity. However, the noise sensitivity of traditional analysis methods limits both the types of stimuli that can be presented inside the scanner and the experimental designs that are feasible. When subjects watch scenes from a movie, fMRI images capture a high level of signal variability that traditional statistical methods tend to reduce. Convolutional neural networks (CNNs) are powerful tools that can leverage this variability to find patterns that traditional methods may overlook, and they have exhibited strong predictive performance on MRI data in other contexts. We propose extending these models to more complex datasets. We train CNNs with different configurations to classify the activation patterns of participants watching different movie scenes. We then assess overall model performance and summarize the signal present in the correctly classified fMRI images, indicating the regions of importance detected by the CNN. Finally, we map these salient regions onto anatomical and functional brain atlases to interpret the underlying cognitive processes. Our empirical approach identifies the image features the models deem relevant during classification and supports extending this implementation to other experimental paradigms in neuroimaging.
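To make the core operation of such models concrete, the sketch below implements a single-channel 3D convolution in plain NumPy, the building block a 3D CNN applies to volumetric fMRI data. The volume shape, the averaging kernel, and the function name are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Single-channel 3D convolution with 'valid' padding.

    This is the elementary operation a 3D CNN layer applies to a
    volumetric input such as an fMRI image: slide a small 3D kernel
    over the volume and take a weighted sum at each position.
    """
    D, H, W = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z + kd, y:y + kh, x:x + kw] * kernel)
    return out

# Hypothetical fMRI-like volume (e.g. a heavily downsampled BOLD image).
rng = np.random.default_rng(0)
vol = rng.standard_normal((8, 8, 8))

# A 3x3x3 averaging kernel; in a trained CNN these weights are learned.
k = np.ones((3, 3, 3)) / 27.0
feat = conv3d_valid(vol, k)
print(feat.shape)  # (6, 6, 6): each output voxel summarizes a 3x3x3 neighborhood
```

In a real model (e.g. PyTorch's `nn.Conv3d`), many such kernels run in parallel per layer and their weights are learned by backpropagation; stacking layers lets the network build increasingly spatially extended features, which is what allows salient brain regions to be traced back from correct classifications.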