Explainable EEG for Auditory Attention Decoding
Abstract
Human communication involves the simultaneous processing of multiple sounds. To function in complex auditory conditions, the auditory system distinguishes sound sources while reducing the impact of ambient noise. Auditory attention decoding (AAD) uses brain signals to identify which conversation a listener is attending to in an environment where several people are speaking simultaneously. Our goal was to use electroencephalogram (EEG) signals for AAD, predicting the attended sound from neural activity in the auditory cortex with deep learning models. The EEG signals are converted into graphs, which are then fed to a ResNet-101 architecture. We obtained an AUC of 94.6% with data from 4 participants and 87.7% with data from 18 participants. We then evaluated the model with the Grad-CAM technique to understand which characteristics and types of EEG components are used for AAD, thereby elucidating auditory attention through model explainability. This study shows that the intensity of alpha, beta, and sensorimotor rhythms varies with the type of speech being attended. Although preliminary, this study shows the potential of combining explainability with deep learning to increase the robustness of EEG source analysis for AAD.
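As a rough illustration of the pipeline the abstract describes, the sketch below feeds an image-like EEG representation to ResNet-101 (via torchvision) and computes a Grad-CAM saliency map over its last convolutional block. The rendering of EEG windows into images, the binary label encoding, and the choice of model.layer4 as the target layer are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

# ResNet-101 expects 3-channel images; replace the classification head
# with a binary AAD decision (attended speaker 1 vs. speaker 2).
model = resnet101(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

# Hypothetical batch of EEG-derived images, e.g. a time-frequency plot of
# one EEG window rendered as a (3, 224, 224) tensor (assumed input format).
x = torch.randn(1, 3, 224, 224)

# --- Grad-CAM over the last convolutional block (model.layer4) ---
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output          # (N, 2048, 7, 7) feature maps

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0]    # gradient of the score w.r.t. features

h1 = model.layer4.register_forward_hook(fwd_hook)
h2 = model.layer4.register_full_backward_hook(bwd_hook)

logits = model(x)
cls = logits[0].argmax().item()           # explain the predicted class
model.zero_grad()
logits[0, cls].backward()

# Weight each feature map by its spatially averaged gradient, sum, ReLU.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)    # (N, C, 1, 1)
cam = torch.relu((weights * activations["feat"]).sum(dim=1))  # (N, 7, 7)
cam = cam / (cam.max() + 1e-8)            # normalize to [0, 1]

h1.remove(); h2.remove()
# `cam` is a coarse saliency map to upsample onto the input image.
```

In practice, the coarse map would be upsampled to the input resolution and overlaid on the EEG-derived image, making it possible to see which time-frequency regions (e.g. alpha or beta bands) drive the attention decision.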