Performance Analysis of Hybrid Deep-Transfer Learning Approaches with Machine Learning Methods for Face Recognition

Abstract

This study addresses the pressing challenge of supervised face recognition under unconstrained conditions by systematically integrating classical machine learning, deep learning, and transfer learning approaches. While the existing literature demonstrates significant progress, particularly with hybrid and transfer learning models, a gap remains in unified, detailed benchmarking across diverse techniques. The primary objective is to holistically compare Support Vector Machines (SVM) with PCA, Artificial Neural Networks (ANN), XGBoost, a custom Convolutional Neural Network (CNN), MobileNetV2-based transfer learning, and stacked hybrid meta-models on the Labeled Faces in the Wild (LFW) dataset. The methodology encompasses data preprocessing, parallel feature extraction, dimensionality reduction, ensemble learning, and interpretability analysis. Experimental results show that the stacked hybrid meta-model achieves the highest test accuracy (87.9%) and macro ROC-AUC (0.983), with MobileNetV2 transfer learning also excelling in sample efficiency and overall performance. Future research should expand interpretability diagnostics and benchmark these pipelines on more diverse or occluded datasets for greater real-world applicability.
