A geometric dissipation bound on the lifespan of gradient-based adaptation

Abstract

The stability of adaptive learning systems is commonly attributed to algorithmic design choices, such as optimization strategies, architectural features, or regularization schemes. However, the physical constraints governing the sustained operation of gradient-based adaptation remain poorly understood. Here we identify a finite geometric dissipation budget that bounds the functional lifespan of gradient-based adaptive systems. By tracking the cumulative geometric displacement of parameters during optimization, we show that system collapse is not primarily determined by specific hyperparameters or model architectures, but instead correlates with the exhaustion of this dissipation budget. Across multiple architectures and optimization algorithms, we observe a consistent dissipation horizon: while learning dynamics may differ in speed, the total geometric path length accumulated at collapse remains bounded. We further demonstrate that exhaustion of this geometric budget coincides with a divergence in the Hessian spectral radius, indicating a loss of adaptive plasticity and a transition into a dynamically constrained regime. Although our analysis employs an operational Euclidean proxy for geometric dissipation, we argue that the existence of a finite adaptive horizon reflects a structural property of the learning landscape rather than a coordinate artifact. These findings establish a quantitative physical limit on continual adaptation and suggest that sustained learning requires mechanisms that actively regulate or reset geometric dissipation, which are absent in current gradient-based artificial systems.
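The abstract's operational proxy for geometric dissipation is the cumulative Euclidean displacement of the parameters over training, i.e. the path length sum over steps of the norm of each parameter update. The following is a minimal sketch of how such a quantity can be tracked; the toy quadratic loss, learning rate, and function names are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

def gradient_descent_with_path_length(grad_fn, theta0, lr=0.1, steps=100):
    """Run plain gradient descent while accumulating the Euclidean
    path length sum_t ||theta_{t+1} - theta_t||, an operational proxy
    for the geometric dissipation described above.

    grad_fn, lr, and steps are illustrative choices, not the paper's.
    """
    theta = np.asarray(theta0, dtype=float)
    path_length = 0.0
    for _ in range(steps):
        step = -lr * grad_fn(theta)       # parameter update this step
        path_length += np.linalg.norm(step)  # accumulate displacement
        theta = theta + step
    return theta, path_length

# Toy example: f(theta) = 0.5 * ||theta||^2, so grad(theta) = theta.
# Here the trajectory is a straight line toward the optimum at 0, so the
# accumulated path length approaches the initial distance ||theta0|| = 5.
theta_final, L = gradient_descent_with_path_length(
    lambda t: t, np.array([3.0, 4.0])
)
```

On this convex toy problem the path length converges to the straight-line distance to the optimum; the abstract's claim concerns the far less trivial setting where curved, noisy trajectories nonetheless exhaust a bounded total path length at collapse.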
