Research on an Emotion Recognition Model Based on ConvTCN-LSTM-DCAN with Sparse EEG Channels
Abstract
Emotion recognition is of significant importance in artificial intelligence and human-computer interaction. Electroencephalogram (EEG) signals, which directly reflect brain activity, have become a vital tool in emotion recognition research. However, the low-dimensional data provided by sparse EEG channels makes it difficult to extract effective features. This paper proposes an emotion recognition model, ConvTCN-LSTM-DCAN, which integrates an improved Temporal Convolutional Network (ConvTCN), a Long Short-Term Memory network (LSTM), and a custom Dynamic Convolutional Attention Network (DCAN) to improve recognition accuracy. The model extracts features and classifies emotions using only two frontal EEG channels (Fp1 and Fp2). Despite the reduced number of channels, it achieves classification performance comparable to that of multi-channel models. Experimental results show that on the DEAP dataset the model attains 96.14% accuracy on the arousal dimension, 96.47% on the valence dimension, and 95.23% on the combined arousal-valence classification; on the SEED dataset's three-class task, it reaches 93.17%. These results surpass those of traditional emotion recognition models and demonstrate that the proposed method remains competitive with multi-channel models even when only sparse EEG channels are available.
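To make the pipeline described above concrete, the following is a minimal PyTorch sketch of a ConvTCN-LSTM-DCAN-style flow on two-channel (Fp1/Fp2) input. It is not the authors' implementation: the dilated-convolution blocks stand in for the improved TCN, a simple convolutional gating branch stands in for DCAN, and all layer sizes and kernel widths are illustrative assumptions.

```python
# Minimal sketch (assumed structure, not the paper's code) of a
# ConvTCN -> attention -> LSTM -> classifier pipeline on 2-channel EEG.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedBlock(nn.Module):
    """One causal dilated convolution block, a stand-in for the improved TCN (ConvTCN)."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = (3 - 1) * dilation            # left padding keeps the convolution causal
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (batch, channels, time)
        out = self.conv(F.pad(x, (self.pad, 0)))
        return self.act(out) + x                 # residual connection

class SketchConvTCNLSTMDCAN(nn.Module):
    def __init__(self, n_eeg_channels=2, hidden=64, n_classes=2):
        super().__init__()
        self.stem = nn.Conv1d(n_eeg_channels, hidden, kernel_size=1)
        self.tcn = nn.Sequential(DilatedBlock(hidden, 1),
                                 DilatedBlock(hidden, 2),
                                 DilatedBlock(hidden, 4))
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Placeholder for DCAN: attention weights produced by a small conv branch.
        self.attn = nn.Sequential(nn.Conv1d(hidden, hidden, kernel_size=1), nn.Sigmoid())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, 2, time) for Fp1/Fp2
        h = self.tcn(self.stem(x))               # temporal feature extraction
        h = h * self.attn(h)                     # attention re-weighting (DCAN stand-in)
        h, _ = self.lstm(h.transpose(1, 2))      # (batch, time, hidden)
        return self.head(h[:, -1])               # classify from the last time step

# Example: a batch of 8 two-channel EEG windows, 128 samples long.
logits = SketchConvTCNLSTMDCAN()(torch.randn(8, 2, 128))
print(logits.shape)                              # torch.Size([8, 2])
```

With `n_classes=2` the head matches a binary arousal or valence task; the combined arousal-valence and SEED three-class settings would use four and three output units, respectively.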