Interpreting Sleep Activity Through Neural Contrastive Learning

Abstract

Memories are spontaneously replayed during sleep, but capturing this process in the human brain has been challenging because sleep is dominated by slow, rhythmic background activity that differs markedly from waking activity. Each sleep stage, such as NREM and REM, has its own distinct rhythms, making it even harder for models trained on awake tasks to generalise and decode memory replay during sleep. To overcome this, we developed the Sleep Interpreter (SI), an artificial neural network. We first collected a large EEG dataset from 135 participants, recording brain activity during both awake tasks and overnight sleep. Using a Targeted Memory Reactivation (TMR) technique with 15 pairs of auditory cues and visual images, we tracked when specific memories were reactivated during sleep. The SI model was then trained separately for the NREM and REM stages, using contrastive learning to align neural patterns between wakefulness and sleep while filtering out the background rhythms that previously hindered decoding. We also examined how specific sleep rhythms, such as slow oscillations and their coupling with spindles, influenced decoding performance. In a 15-way classification task during sleep, our model achieved a Top-1 accuracy of up to 40.05% on unseen subjects, surpassing all other available decoding models. Finally, we developed a real-time sleep decoding system by integrating an online automatic sleep-staging process with the stage-specific SI models. This ability to decode brain activity during sleep opens new avenues for exploring the functional roles of sleep. By making our dataset and decoding system publicly available, we provide a valuable resource for advancing research into sleep, memory, and related disorders.
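The abstract does not specify the exact form of the contrastive objective used to align wake and sleep neural patterns. A common instantiation of this idea is a symmetric InfoNCE loss, in which paired wake/sleep trial embeddings of the same stimulus are pulled together while all other pairings in the batch act as negatives. The NumPy sketch below is illustrative only (the function name, temperature value, and embedding shapes are assumptions, not details from the paper):

```python
import numpy as np

def _logsumexp(x, axis):
    """Numerically stable log-sum-exp along the given axis."""
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def info_nce_loss(wake_emb, sleep_emb, temperature=0.1):
    """Symmetric InfoNCE loss over paired wake/sleep embeddings.

    wake_emb, sleep_emb: (N, D) arrays; row i of each array is assumed
    to come from trials of the same stimulus (e.g. one TMR cue-image pair).
    Returns a scalar loss that is small when matching rows align.
    """
    # L2-normalise so the dot product is cosine similarity.
    w = wake_emb / np.linalg.norm(wake_emb, axis=1, keepdims=True)
    s = sleep_emb / np.linalg.norm(sleep_emb, axis=1, keepdims=True)
    logits = (w @ s.T) / temperature  # (N, N); matching pairs on the diagonal
    idx = np.arange(len(w))
    # Cross-entropy in both directions: wake->sleep and sleep->wake.
    log_p_ws = logits - _logsumexp(logits, axis=1)
    log_p_sw = logits.T - _logsumexp(logits.T, axis=1)
    return -0.5 * (log_p_ws[idx, idx].mean() + log_p_sw[idx, idx].mean())
```

Under this objective, perfectly aligned wake/sleep embeddings yield a lower loss than mismatched ones, which is the property a decoder trained on wake data needs in order to transfer to sleep.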