LightEmoNet: A Lightweight Multiscale Knowledge Distillation Framework for Subject-Independent EEG-Based Emotion Recognition


Abstract

Emotion recognition from electroencephalogram (EEG) signals constitutes a core challenge in affective computing. However, the non-stationary nature of EEG signals and substantial inter-subject variability hinder the development of generalizable models. Most existing approaches rely on subject-dependent training, entail high computational costs, and often overlook the importance of multi-scale temporal structure. In this study, we propose a lightweight three-stage knowledge distillation framework tailored for subject-independent EEG-based emotion recognition. Our architecture consists of a high-capacity teacher, an intermediate assistant model, and a compact student network, facilitating effective knowledge transfer across multiple scales. To model emotional dynamics at varying temporal resolutions, feature distillation is conducted at both shorter and longer time scales. Comprehensive experiments on two benchmark datasets, DEAP and HCI-Tagging, demonstrate that our framework consistently outperforms strong baseline methods, achieving statistically significant gains in both accuracy and F1-score. Notably, our model attains approximately 60% accuracy under subject-independent settings. These findings underscore the practical value of structured knowledge distillation in enabling efficient and generalizable EEG-based emotion recognition for real-world BCI applications.
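The three-stage pipeline described above applies knowledge distillation twice: teacher to assistant, then assistant to student. As a minimal sketch (not the paper's implementation; the temperature `T` and mixing weight `alpha` are illustrative assumptions), the classic distillation objective at each stage combines a softened-label KL term with the ordinary cross-entropy on hard labels:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard KD objective: alpha * soft-target KL + (1 - alpha) * hard-label CE.
    In a three-stage pipeline this loss is applied twice: teacher -> assistant,
    then assistant -> student. (Hyperparameters here are illustrative.)"""
    p_t = softmax(teacher_logits, T)  # softened teacher targets
    p_s = softmax(student_logits, T)  # softened student predictions
    # KL(p_t || p_s), scaled by T^2 to keep gradient magnitudes comparable
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    # Cross-entropy against the hard emotion labels (e.g. valence/arousal classes)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

The multiscale aspect of the framework would add analogous feature-matching losses at short and long temporal windows; this sketch covers only the logit-level transfer.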