A Controlled Multi-dataset Evaluation of Custom CNNs, Pretrained Feature Extractors, and Transfer Learning

Abstract

Convolutional neural networks (CNNs) are commonly trained using one of three paradigms: training from scratch, reuse of pretrained representations, or transfer learning via fine-tuning. While each strategy is widely adopted, their relative effectiveness depends strongly on dataset characteristics, computational constraints, and deployment requirements. This paper presents a controlled and reproducible comparison of these three training paradigms across five real-world image classification datasets spanning infrastructure inspection, agricultural disease recognition, and object-centric classification. All models are trained and evaluated under identical data splits, preprocessing pipelines, optimization settings, and evaluation metrics. Performance is reported using macro-averaged accuracy, precision, recall, and F1-score, alongside training time and model complexity. Rather than proposing new network architectures, this work emphasizes controlled evaluation and reproducibility to support evidence-based selection of CNN training strategies across diverse application domains.
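As context for the reported metrics: macro-averaging computes each metric per class and then takes an unweighted mean, so minority classes weigh as much as majority ones. A minimal self-contained sketch (not the paper's evaluation code, which is unspecified here) of macro-averaged precision, recall, and F1:

```python
def macro_scores(y_true, y_pred, classes):
    """Macro-averaged precision, recall, and F1 over the given classes."""
    precs, recs, f1s = [], [], []
    for c in classes:
        # Per-class true positives, false positives, false negatives
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if (tp + fp) else 0.0
        rec = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1)
    n = len(classes)
    # Unweighted mean across classes: every class counts equally
    return sum(precs) / n, sum(recs) / n, sum(f1s) / n
```

For imbalanced datasets such as disease-recognition corpora, this unweighted mean makes poor performance on a rare class visible, whereas micro-averaging would let the dominant class mask it.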