CNN in Neural Networks for Image-based Face Emotion Identification on Recognition Datasets
Abstract
Because facial expressions can vary greatly, identifying emotions from face photographs is challenging. Prior studies applying deep learning models to facial image emotion classification have been conducted on a variety of datasets with a restricted range of expressions. This work extends the application of deep learning for facial emotion recognition (FER) to the Recognition dataset, which contains ten target emotions: amusement, awe, enthusiasm, liking, surprise, anger, contempt, fear, sorrow, and neutral. Several data preparation steps were taken to convert the video data into images and to augment the data. To establish a reliable combination of hyperparameter settings, this study proposes two methods for developing Convolutional Neural Network (CNN) models: transfer learning (fine-tuning) with the pre-trained Inception V3 and MobileNet V2 models, and building a model from scratch using the Taguchi technique. The proposed model achieved an accuracy of 96% and an average F1-score of 0.95 on the test data, demonstrating strong performance across several experimental procedures.
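As a rough illustration of the transfer-learning approach described above, the sketch below fine-tunes a pre-trained MobileNet V2 backbone for the ten emotion classes. The input size, augmentation, layer choices, and optimizer settings are assumptions for illustration only, not the exact configuration used in this study.

# Minimal sketch: fine-tuning MobileNet V2 for 10-class facial emotion
# recognition, assuming 224x224 RGB face crops extracted from the videos.
# Hyperparameters are illustrative, not the paper's tuned values.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # amusement, awe, enthusiasm, liking, surprise,
                  # anger, contempt, fear, sorrow, neutral

# Pre-trained backbone without its ImageNet classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone for the first training phase

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds are assumed tf.data.Dataset objects of (image, label) pairs
# built from the frames extracted from the dataset's videos.
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Fine-tuning phase: unfreeze the backbone and continue at a lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

The same two-phase pattern (frozen backbone, then low-learning-rate fine-tuning) applies to the Inception V3 variant by swapping the backbone class; the from-scratch model instead searches CNN hyperparameters with a Taguchi orthogonal-array design rather than exhaustive grid search.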