Contrastive Representation Learning with Transformers for Robust Auditory EEG Decoding

Abstract

Decoding of continuous speech from electroencephalography (EEG) presents a promising avenue for understanding neural mechanisms of auditory processing and developing applications in hearing diagnostics. Recent advances in deep learning have improved decoding accuracy, but challenges remain due to the low speech-to-noise ratio of the recorded brain signals. This study explores the application of contrastive learning, a self-supervised learning technique, to learn robust latent representations of EEG signals. We introduce a novel model architecture that leverages contrastive learning and transformer networks to capture relationships between auditory stimuli and EEG responses. Our model is evaluated on two tasks from the ICASSP 2023 Auditory EEG Decoding Challenge: match-mismatch classification and stimulus envelope regression. We achieve state-of-the-art performance on both tasks, significantly outperforming previous winners with 87% accuracy in match-mismatch classification and a 0.176 Pearson correlation in envelope regression. Furthermore, we investigate the impact of model architecture, training set size, and fine-tuning on decoding performance, providing insights into the factors influencing model generalizability and accuracy. Our findings underscore the potential of contrastive learning for advancing the field of auditory EEG decoding and its potential applications in clinical settings.
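
To make the contrastive match-mismatch idea concrete, the sketch below shows one way such a setup can be wired together in PyTorch: a transformer-based EEG encoder and a stimulus-envelope encoder map time windows to embeddings, and an InfoNCE-style loss pulls matched EEG/envelope pairs together while pushing mismatched pairs apart. This is a minimal illustration, not the authors' implementation; the encoder layout, channel counts, window length, and temperature are assumptions for the example.

```python
# Minimal contrastive match-mismatch sketch (illustrative; not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentEncoder(nn.Module):
    """Projects a multichannel time series to one embedding per segment."""
    def __init__(self, in_channels: int, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.proj = nn.Conv1d(in_channels, d_model, kernel_size=1)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, d_model)
        h = self.proj(x).transpose(1, 2)            # (batch, time, d_model)
        h = self.transformer(h).mean(dim=1)         # pool over time
        return F.normalize(self.out(h), dim=-1)     # unit-norm embeddings

def info_nce_loss(eeg_emb: torch.Tensor, env_emb: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Matched EEG/envelope pairs are positives; the other pairs in the
    batch act as mismatched negatives."""
    logits = eeg_emb @ env_emb.t() / temperature    # (batch, batch)
    targets = torch.arange(eeg_emb.size(0))
    return F.cross_entropy(logits, targets)

# Toy forward/backward pass on random data (64 EEG channels, 320 samples
# per window are assumed values for illustration only).
eeg_encoder = SegmentEncoder(in_channels=64)
env_encoder = SegmentEncoder(in_channels=1)
eeg = torch.randn(8, 64, 320)    # 8 EEG windows
env = torch.randn(8, 1, 320)     # the matching speech-envelope windows
loss = info_nce_loss(eeg_encoder(eeg), env_encoder(env))
loss.backward()
```

At test time, a match-mismatch decision for a candidate segment pair can be made by comparing the similarity of their embeddings against that of an imposter segment, which is one common way such contrastive encoders are evaluated.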
