Enhancing Efficiency and Regularization in Convolutional Neural Networks: Strategies for Optimized Dropout

Abstract

Background/Objectives: Convolutional Neural Networks (CNNs), while effective in tasks such as image classification and natural language processing, often suffer from overfitting and inefficient training because conventional regularization techniques, such as traditional dropout, are static and structure-agnostic. This study addresses these limitations by proposing a more dynamic, context-sensitive dropout strategy. Methods: We introduce Probabilistic Feature Importance Dropout (PFID), a novel regularization method that assigns dropout rates according to the probabilistic significance of individual features. PFID is integrated with adaptive, structured, and contextual dropout strategies, forming a unified framework for intelligent regularization. Results: Experimental evaluation on standard benchmark datasets, including CIFAR-10, MNIST, and Fashion-MNIST, showed that PFID significantly improves classification accuracy, reduces training loss, and increases computational efficiency compared to conventional dropout methods. Conclusions: PFID offers a practical and scalable solution for enhancing CNN generalization and training efficiency. Its dynamic, feature-aware design provides a strong foundation for future advances in adaptive regularization for deep learning models.
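As a concrete illustration, the sketch below shows one plausible reading of the PFID idea in PyTorch: per-channel feature importance is estimated from mean absolute activations in the current batch, and channels deemed more important receive lower dropout probabilities. The importance estimator, the linear rate mapping, and the `base_rate`/`min_rate` parameters are assumptions made for illustration; the abstract does not specify the paper's exact formulation.

```python
import torch
import torch.nn as nn


class PFIDropout(nn.Module):
    """Minimal sketch of feature-importance-based dropout in the spirit of PFID.

    Assumption: channel importance is proxied by the mean absolute activation
    over the batch and spatial dimensions, and more important channels get
    lower dropout rates. The paper's actual estimator may differ.
    """

    def __init__(self, base_rate: float = 0.5, min_rate: float = 0.1):
        super().__init__()
        self.base_rate = base_rate  # dropout rate for the least important channel
        self.min_rate = min_rate    # dropout rate for the most important channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # no dropout at inference time, as with standard dropout
        # Estimate per-channel importance from mean |activation|
        # over batch and spatial dimensions (x: [N, C, H, W]).
        importance = x.abs().mean(dim=(0, 2, 3))  # shape [C]
        # Min-max normalize so the mapping to rates is scale-invariant.
        spread = importance.max() - importance.min()
        scaled = (importance - importance.min()) / (spread + 1e-8)
        # Linearly map importance to per-channel drop rates:
        # most important channel -> min_rate, least important -> base_rate.
        drop_rates = self.base_rate - scaled * (self.base_rate - self.min_rate)
        keep_probs = 1.0 - drop_rates
        # Sample a channel-wise Bernoulli mask and rescale kept channels
        # by 1/keep_prob (inverted dropout), then broadcast over N, H, W.
        mask = torch.bernoulli(keep_probs).to(x.dtype) / keep_probs
        return x * mask.view(1, -1, 1, 1)


if __name__ == "__main__":
    layer = PFIDropout(base_rate=0.5, min_rate=0.1)
    layer.train()
    x = torch.randn(8, 16, 32, 32)  # a batch of convolutional feature maps
    y = layer(x)
    print(y.shape)  # torch.Size([8, 16, 32, 32])
```

In use, such a layer would stand in for a standard `nn.Dropout2d` after a convolutional block. Because the mask is resampled from batch statistics on every forward pass, the effective regularization adapts as feature importance shifts during training, which is the dynamic, feature-aware behavior the abstract attributes to PFID.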
