Emotion Recognition on Speech using Hybrid Model CNN and BI-LSTM Techniques

Abstract

Speech emotion recognition is critical for many applications, such as human-computer interaction and psychological analysis. Because conventional models fail to capture the subtle nuances of emotional variation in speech, their identification performance is limited. This study addresses the problem with a new hybrid model that combines a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory (BiLSTM) network. The model's distinctive strength lies in pairing the CNN's feature-extraction ability with the BiLSTM's capacity to model temporal context. The proposed model achieved outstanding performance, reaching 98.48% accuracy, 97.25% precision, 98.29% recall, and an F1-score of 97.39%. This surpassed other models, including a PNN (95.56%), an LSTM (97.1%), a 1-D DCNN (93.31%), a GMM (74.33%), and deep transfer learning models (86.54%). The developed hybrid model can accurately detect and classify emotions in speech and can work effectively in real-world applications.
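To illustrate the kind of architecture the abstract describes, the sketch below shows a minimal CNN + BiLSTM classifier in PyTorch. The layer sizes, the MFCC input representation, and the number of emotion classes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Hypothetical CNN + BiLSTM sketch; hyperparameters are assumptions."""

    def __init__(self, n_mfcc: int = 40, n_classes: int = 8):
        super().__init__()
        # 1-D convolution extracts local spectral features from MFCC frames
        self.conv = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM models temporal context in both directions
        self.bilstm = nn.LSTM(64, 128, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_mfcc, time)
        h = self.conv(x)                        # (batch, 64, time // 2)
        h, _ = self.bilstm(h.transpose(1, 2))   # (batch, time // 2, 256)
        return self.fc(h[:, -1])                # classify from final timestep

model = CNNBiLSTM()
logits = model(torch.randn(2, 40, 100))  # two utterances, 100 MFCC frames each
print(tuple(logits.shape))               # one logit vector per class per utterance
```

In this arrangement the convolutional front end compresses each utterance into a shorter sequence of learned features, which the BiLSTM then reads in both directions before a linear layer produces per-class logits.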
