An Improved Facial Emotion Recognition System Using Convolutional Neural Network for the Optimization of Human Robot Interaction
Abstract
Artificial intelligence (AI) has been effectively augmenting robotics applications, including surveillance, medical support, and aid services for the elderly or disabled. Most robotics applications require a variety of human-robot interactions (HRI) to work effectively and accurately, and computer vision plays a vital role in making HRI precise. Facial emotion recognition (FER) is one of the computer vision techniques crucial for enhancing HRI. This paper presents a study that uses algorithms based on computer vision and machine learning (ML) to identify the emotional states of humans in photographs and videos while users interact with various visual objects. Using ML techniques and a digital image processing pipeline, the study describes the development of software that can recognize emotions from human facial gestures. The research shows that creating and training such software on emotional expressions, together with the use of convolutional neural networks (CNNs) for emotion identification, is feasible. The main objective of this study is to find the essential facial gestures that the CNN-based FER framework emphasizes. Finally, the proposed model is comparatively evaluated on the FER2013, Real-world Affective Faces Database (RAF-DB), and CK+ datasets. Accuracy is highest on the CK+ dataset, at around 95%, and lowest on the FER2013 dataset, at around 64%. These findings advance our knowledge of neural networks and help improve the efficiency of computer vision systems.
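To make the CNN-based FER pipeline concrete, the sketch below shows a minimal forward pass in NumPy over a single 48×48 grayscale face (the FER2013 input format) producing a distribution over the seven standard emotion labels. This is an illustrative toy, not the paper's actual architecture: the kernel count, layer sizes, and random weights are assumptions chosen only to demonstrate the convolution → ReLU → pooling → softmax structure that such models share.

```python
import numpy as np

# The seven emotion classes used by FER2013 and CK+ style datasets.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def conv2d(x, kernels, stride=1):
    """Valid convolution of a 2-D image with a bank of square kernels."""
    n, k, _ = kernels.shape
    H, W = x.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    out = np.zeros((n, out_h, out_w))
    for f in range(n):
        for i in range(out_h):
            for j in range(out_w):
                patch = x[i * stride:i * stride + k, j * stride:j * stride + k]
                out[f, i, j] = np.sum(patch * kernels[f])
    return out

def relu(x):
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling over each feature map."""
    n, H, W = x.shape
    x = x[:, :H // size * size, :W // size * size]
    return x.reshape(n, H // size, size, W // size, size).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(face, kernels, weights, bias):
    """One conv block followed by a dense softmax classifier."""
    feat = max_pool(relu(conv2d(face, kernels)))
    logits = weights @ feat.ravel() + bias
    return softmax(logits)

# Random weights stand in for trained parameters (assumption for the demo).
rng = np.random.default_rng(0)
face = rng.random((48, 48))                       # one grayscale face
kernels = rng.standard_normal((4, 3, 3)) * 0.1    # 4 3x3 filters
feat_dim = 4 * 23 * 23                            # (48-3+1)=46, pooled to 23
weights = rng.standard_normal((7, feat_dim)) * 0.01
bias = np.zeros(7)

probs = predict(face, kernels, weights, bias)
print(EMOTIONS[int(np.argmax(probs))])            # predicted emotion label
```

A trained model would learn `kernels`, `weights`, and `bias` by backpropagation over labeled faces; here they are random, so only the shapes and the data flow are meaningful.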