Meta-Regularization Selection: An Optimization Framework for Dynamic Regularization in Deep Neural Networks

Abstract

In this paper, we propose a novel optimization framework for dynamic regularization in deep neural networks, termed meta-regularization selection. Traditional regularization techniques impose fixed constraints that can hinder model adaptability and generalization. Our approach addresses this limitation by formulating meta-regularization selection as a nested bi-level optimization problem in which both the type and the strength of the regularization strategy are optimized continuously throughout training. We rigorously establish the mathematical foundations of this framework, proving the existence of solutions and convergence under specific conditions. Extensive experimental validation demonstrates that the proposed method significantly improves generalization across various benchmarks compared with static and hand-tuned regularization approaches. By enabling real-time adaptive adjustment of the regularization, our framework not only improves model robustness but also unlocks new potential for efficient learning in complex environments. This research represents a significant step forward in deep learning, offering insight into the dynamic interplay between model parameters and regularization strategies that can lead to better-performing neural networks.
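To make the bi-level formulation concrete: the abstract does not state the objective explicitly, but a standard nested formulation of this kind (our reading, not necessarily the authors' exact notation) would be

```latex
\min_{\lambda \in \Lambda} \; \mathcal{L}_{\mathrm{val}}\bigl(\theta^{*}(\lambda)\bigr)
\quad \text{s.t.} \quad
\theta^{*}(\lambda) \in \arg\min_{\theta} \;
\mathcal{L}_{\mathrm{train}}(\theta) + \mathcal{R}(\theta; \lambda),
```

where \(\theta\) are the model parameters and \(\lambda\) encodes both the type and the strength of the regularizer \(\mathcal{R}\).

Since the abstract does not spell out the algorithm, the following is a minimal sketch of one plausible instantiation, assuming a PyTorch setting: here \(\lambda\) consists of softmax weights over two candidate penalties (L2 and L1) plus a log-scale strength, and the hypergradient is approximated by differentiating through a single unrolled SGD step. The names (`penalty`, `functional_forward`, the candidate set, the synthetic data) are our illustration, not the paper's method.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical setup: a small regression net on synthetic data; the
# paper's actual models and benchmarks are not given in the abstract.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()

# Meta-parameters (illustrative): softmax weights over two candidate
# penalties ("type") and a log-scale coefficient ("strength").
reg_logits = torch.zeros(2, requires_grad=True)
log_strength = torch.tensor(-3.0, requires_grad=True)
opt_outer = torch.optim.Adam([reg_logits, log_strength], lr=1e-2)

inner_lr = 1e-2

def penalty(params):
    """Convex combination of candidate penalties, weighted by meta-params."""
    w = torch.softmax(reg_logits, dim=0)
    l2 = sum(p.pow(2).sum() for p in params)
    l1 = sum(p.abs().sum() for p in params)
    return log_strength.exp() * (w[0] * l2 + w[1] * l1)

def functional_forward(x, params):
    # Manual forward pass so the net can be evaluated at "virtual" params.
    h = torch.relu(x @ params[0].t() + params[1])
    return h @ params[2].t() + params[3]

for step in range(500):
    x_tr, y_tr = torch.randn(64, 10), torch.randn(64, 1)    # stand-in batches
    x_val, y_val = torch.randn(64, 10), torch.randn(64, 1)

    params = list(model.parameters())

    # Inner objective: training loss plus the current adaptive penalty.
    inner_loss = loss_fn(functional_forward(x_tr, params), y_tr) + penalty(params)
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)

    # One unrolled SGD step yields "virtual" parameters that still carry
    # their dependence on the regularization meta-parameters.
    virtual = [p - inner_lr * g for p, g in zip(params, grads)]

    # Outer step: hypergradient of the validation loss with respect to the
    # meta-parameters, taken through the unrolled inner step.
    val_loss = loss_fn(functional_forward(x_val, virtual), y_val)
    opt_outer.zero_grad()
    val_loss.backward()
    opt_outer.step()

    # Commit the inner update to the actual model parameters.
    with torch.no_grad():
        for p, v in zip(params, virtual):
            p.copy_(v)
```

The one-step unrolling is a common first-order approximation of the exact hypergradient (as used, e.g., in differentiable hyperparameter optimization); the paper's convergence analysis may rely on a different, exact scheme.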
