Interpreting Sleep Activity Through Neural Contrastive Learning
Abstract
Memories are spontaneously replayed during sleep, a process thought to support memory consolidation. Capturing this replay in humans has been challenging, however, because, unlike wake EEG, sleep EEG is dominated by slow, rhythmic background activity. Moreover, each sleep stage (e.g., NREM, REM) has distinct rhythms, hindering generalisation of models trained on wake-state data. To overcome these challenges, we developed the Sleep Interpreter (SI), a neural network model that decodes memory replay from sleep EEG. In a large dataset comprising 135 participants (∼1,000 h of overnight sleep; ∼400 h of wake), we employed a targeted memory reactivation (TMR)-like paradigm with 15 semantically congruent cue-image pairs to tag specific memories. SI was trained separately for NREM and REM sleep, using contrastive learning to align neural patterns across wake and sleep while filtering out stage-specific background rhythms. We also examined how slow-oscillation and spindle coupling influences decoding in NREM sleep. In a 15-way classification, SI achieved up to 40.02% Top-1 accuracy on unseen subjects. To test generalisability, we followed up with two independent nap experiments in separate samples and applied the trained SI model off-the-shelf. The first probed spontaneous reactivation without auditory cues, while the second used semantic-free sounds with new images. In both, SI successfully decoded reactivation during sleep, and decoding strength correlated with post-nap memory performance. By openly sharing our dataset and the SI system, we provide a unique resource for advancing research on memory and learning during sleep, and related disorders.
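The contrastive alignment of wake and sleep activity described above can be illustrated with a minimal sketch. Assuming an InfoNCE-style symmetric objective (the paper's exact loss, architecture, and hyperparameters may differ), wake and sleep epochs tagged with the same cue-image pair act as positive pairs, while epochs tagged with the other pairs act as negatives; the `info_nce` function name and the temperature value here are illustrative, not taken from the SI codebase.

```python
import numpy as np

def info_nce(wake_emb, sleep_emb, temperature=0.1):
    """Symmetric InfoNCE loss aligning wake and sleep embeddings.

    wake_emb, sleep_emb: (N, D) arrays in which row i of each matrix
    corresponds to the same cue-image pair, so matched rows are the
    positive pairs and all other rows serve as negatives.
    """
    # L2-normalise so dot products are cosine similarities
    w = wake_emb / np.linalg.norm(wake_emb, axis=1, keepdims=True)
    s = sleep_emb / np.linalg.norm(sleep_emb, axis=1, keepdims=True)
    logits = w @ s.T / temperature  # (N, N); positives on the diagonal

    # Cross-entropy over rows (wake -> sleep direction)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_w2s = -np.diag(log_prob).mean()

    # Symmetric direction (sleep -> wake)
    log_prob_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_s2w = -np.diag(log_prob_t).mean()

    return (loss_w2s + loss_s2w) / 2
```

Minimising this loss pulls wake and sleep representations of the same tagged memory together while pushing apart representations of different memories, which is what lets stage-specific background rhythms (shared across all 15 classes within a stage) drop out of the learned features.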