Video Analysis Based Effective Multi-Facial Emotion Recognition and Classification Framework Using GCRCNN

Abstract

Facial emotions are the varying expressions of a person's face that communicate feelings and moods. Facial emotion in videos can be detected by analyzing keyframes for facial muscle movements and patterns; however, detection is challenging because multiple expressions may occur simultaneously and camera angles vary. To overcome these pitfalls, this paper presents a practical framework for detecting facial emotions in videos. First, the input keyframes are pre-processed with the MF and IN algorithms to obtain enhanced images. Second, humans are detected and tracked using the YOLOv7 and BYTE tracking algorithms, and faces are then detected with the T-SNEVJ algorithm. Third, facial landmarks are extracted with the HC technique, a mesh is generated using ED-SVR, and features are extracted; feature-point tracking followed by motion analysis is performed with CC_OF. Finally, the GCRCNN algorithm classifies multi-facial emotions. The proposed system achieves an accuracy of 99.34% and a recall of 99.20%, outperforming existing FER techniques.
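The stage ordering described in the abstract can be sketched as a simple pipeline skeleton. This is a minimal illustration only: every function below is a hypothetical placeholder stub, and none of it reproduces the paper's actual MF/IN pre-processing, YOLOv7 detection, BYTE tracking, T-SNEVJ face detection, HC landmark extraction, ED-SVR mesh generation, CC_OF motion analysis, or GCRCNN classifier.

```python
# Hedged sketch of the abstract's pipeline, assuming one keyframe in
# and one (track_id, emotion) pair out per detected person. All stubs
# below are placeholders, not the paper's implementations.

def preprocess(frame):
    # Stage 1: MF + IN pre-processing to enhance the keyframe (stubbed).
    return {"frame": frame}

def detect_and_track(frame):
    # Stage 2a: YOLOv7 person detection + BYTE tracking (stubbed as one box).
    return [{"track_id": 0, "bbox": (0, 0, 64, 64)}]

def detect_face(person):
    # Stage 2b: T-SNEVJ face detection inside the person box (stubbed).
    return {"face_bbox": person["bbox"]}

def extract_landmarks(face):
    # Stage 3a: HC facial-landmark extraction (stubbed fixed points).
    return [(10, 10), (20, 10), (15, 20)]

def build_mesh(landmarks):
    # Stage 3b: ED-SVR mesh generation from the landmarks (stubbed).
    return {"vertices": landmarks}

def track_motion(mesh):
    # Stage 3c: CC_OF feature-point tracking / motion analysis (stubbed).
    return {"motion": [0.0] * len(mesh["vertices"])}

def classify_emotion(motion_features):
    # Stage 4: GCRCNN multi-facial emotion classification (stubbed label).
    return "neutral"

def recognize_emotions(keyframe):
    """Run the abstract's stages in order for a single keyframe."""
    enhanced = preprocess(keyframe)
    results = []
    for person in detect_and_track(enhanced["frame"]):
        face = detect_face(person)
        mesh = build_mesh(extract_landmarks(face))
        motion = track_motion(mesh)
        results.append((person["track_id"], classify_emotion(motion)))
    return results
```

The per-person loop reflects the "multi-facial" aspect: each tracked identity passes through face detection, landmark/mesh extraction, and motion analysis before classification.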