Spectral Regularization Dynamics: A Continuous-Time Framework for Non-Convex Optimization

Abstract

Non-convex optimization is central to machine learning and scientific computing, yet traditional methods often falter at saddle points. This paper introduces Spectral Regularization Dynamics (SRD), a novel continuous-time framework that leverages second-order information to address these challenges. SRD models the optimization trajectory as a system of ordinary differential equations (ODEs) with an autonomous control mechanism that adjusts regularization based on the Hessian's minimum eigenvalue, ensuring a descent direction. We prove global convergence to critical points using LaSalle's invariance principle and demonstrate, via the stable manifold theorem, that SRD almost surely avoids saddle points and local maxima. Near strict local minima, it recovers the fast convergence of Newton's method. Numerical experiments on benchmark problems validate these claims, positioning SRD as a robust theoretical foundation for optimization.
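The abstract's control mechanism can be illustrated with a minimal sketch: a forward-Euler discretization of a spectrally regularized Newton flow, where a shift μ is chosen from the Hessian's minimum eigenvalue so the regularized Hessian stays positive definite (a descent direction), and μ vanishes near strict local minima, recovering the Newton direction. The specific control law, step size, threshold `delta`, and test function below are illustrative assumptions, not the paper's exact SRD formulation.

```python
import numpy as np

def srd_step(grad, hess, x, dt=0.05, delta=0.5):
    """One forward-Euler step of a spectrally regularized Newton flow.

    Illustrative sketch only: mu = max(0, delta - lambda_min(H)) shifts
    the Hessian just enough that H + mu*I has minimum eigenvalue >= delta,
    so -(H + mu*I)^{-1} grad is a descent direction. Where H is already
    sufficiently positive definite, mu = 0 and the step is pure Newton.
    """
    H = hess(x)
    lam_min = np.linalg.eigvalsh(H)[0]            # smallest Hessian eigenvalue
    mu = max(0.0, delta - lam_min)                # regularize only when needed
    d = -np.linalg.solve(H + mu * np.eye(len(x)), grad(x))
    return x + dt * d

# Hypothetical test problem with a saddle at the origin:
# f(x, y) = x^2 - y^2 + y^4/4, local minima at (0, +/-sqrt(2)).
grad = lambda x: np.array([2 * x[0], -2 * x[1] + x[1] ** 3])
hess = lambda x: np.array([[2.0, 0.0], [0.0, -2.0 + 3 * x[1] ** 2]])

x = np.array([0.5, 0.1])                          # starts near the saddle
for _ in range(2000):
    x = srd_step(grad, hess, x)
```

Starting near the saddle, the regularized flow escapes it and settles at the local minimum (0, √2); with a plain (unregularized) Newton flow, the saddle would instead attract the iterate.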