Emotion Recognition from EEG Signals Using Sub-Band Coding and LSTM Networks: A Focus on Alpha Band
Abstract
Deep neural networks have recently gained significant attention in the classification of human emotions, particularly using electroencephalogram (EEG) signals. EEG offers a non-invasive method for capturing neural activity associated with cognitive and emotional states. However, the inherent non-linearity and noise in raw EEG data necessitate robust preprocessing to enhance the reliability and accuracy of classification models. The proposed study employs a comprehensive preprocessing pipeline that includes re-referencing and band-pass filtering to eliminate interference and isolate relevant frequency bands, thereby improving the quality of the EEG signals. The preprocessed signals are subsequently decomposed into distinct frequency bands, namely delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (14–30 Hz), gamma (30–44 Hz), and phi (45–60 Hz), using principles inspired by sub-band coding. Specifically, the coefficients of the alpha band are then utilized as input features to train a Long Short-Term Memory (LSTM) network, selected for its capacity to effectively capture temporal dynamics in sequential data. The proposed model predicts four dimensions of emotional state: valence, arousal, dominance, and liking. To evaluate the model's performance, metrics such as Mean Squared Error (MSE) and Mean Absolute Error (MAE) are employed, providing a robust assessment of its predictive accuracy. The approach achieves an overall accuracy of 93%, with accuracies of 92% for valence and arousal and 93% for dominance and liking. These findings highlight the efficacy of integrating rigorous preprocessing methods and frequency band decomposition with advanced deep learning architectures for emotion recognition using EEG data.
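The sub-band decomposition described above can be sketched as a bank of band-pass filters. This is a minimal illustration, not the authors' implementation: the band edges are taken from the abstract, while the 128 Hz sampling rate (typical of datasets such as DEAP), the filter order, and the function names are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Band edges (Hz) as listed in the abstract; "phi" follows the paper's naming.
BANDS = {
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (14, 30), "gamma": (30, 44), "phi": (45, 60),
}

def decompose_bands(eeg, fs=128.0, order=4):
    """Split a 1-D EEG signal into the six sub-bands with zero-phase
    Butterworth band-pass filters (fs and order are assumed values)."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, eeg)  # zero-phase filtering
    return out

# The alpha-band coefficients would then serve as the LSTM input features:
# alpha_features = decompose_bands(raw_channel)["alpha"]
```

Note that a 60 Hz upper edge requires a sampling rate above 120 Hz; at the assumed 128 Hz it sits just below the 64 Hz Nyquist limit.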
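To make the LSTM stage concrete, the following is a bare NumPy sketch of a single-layer LSTM whose final hidden state is mapped linearly to the four targets (valence, arousal, dominance, liking). All dimensions, weight names, and the gate ordering are illustrative assumptions; the paper's actual architecture and hyperparameters are not specified in the abstract.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. Gate order assumed: input, forget, cell, output.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])           # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:])       # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def predict_emotions(seq, W, U, b, W_out, b_out):
    """Run a sequence of alpha-band feature vectors through the LSTM and
    map the final hidden state to the four emotion dimensions."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return W_out @ h + b_out  # shape (4,): valence, arousal, dominance, liking
```

In practice one would use a framework LSTM (e.g. a Keras or PyTorch layer) trained against MSE/MAE losses as the abstract describes; the cell above only exposes the recurrence those layers implement.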