Comparative Deep Learning Analysis of Regularization Techniques on Generalization in Baseline CNNs and ResNet Architectures for Machine Learning-Based Image Classification

Abstract

This study addresses the persistent challenge of overfitting in deep learning-based image classification, emphasizing the need for robust regularization strategies as models grow more complex. Motivated by the lack of systematic, quantitative comparisons of regularization techniques across architectures, the objective was to evaluate how methods such as dropout and data augmentation affect generalization in both a baseline CNN and ResNet-18. The methodology involved controlled experiments on the Imagenette dataset at varying resolutions, with consistent application of regularization, early stopping, and transfer learning protocols. Results show that ResNet-18 achieved higher validation accuracy (82.37%) than the baseline CNN (68.74%), and that regularization reduced overfitting and improved generalization in all scenarios. Transfer learning further enhanced performance: fine-tuned models converged faster and attained higher accuracy than models trained from scratch. Future research should explore the interplay between emerging regularization methods and novel architectures, as well as their effectiveness in transfer learning and resource-constrained settings, to further advance the reliability and efficiency of deep learning systems for image classification.
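The early-stopping protocol mentioned in the methodology can be sketched framework-agnostically. The following is a minimal illustrative sketch, not the authors' implementation; the `patience` and `min_delta` parameters are assumptions and not values reported in the study:

```python
class EarlyStopping:
    """Stop training once validation loss stops improving.

    Illustrative sketch of a standard early-stopping protocol;
    parameter values are assumptions, not taken from the paper.
    """

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience      # epochs to wait after the last improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.counter = 0
        self.should_stop = False

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss  # improvement: reset the wait counter
            self.counter = 0
        else:
            self.counter += 1          # no improvement this epoch
            if self.counter >= self.patience:
                self.should_stop = True
        return self.should_stop
```

In a training loop, `step` would be called once per epoch with the current validation loss, and training would break out of the loop as soon as it returns `True`.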
