The Spiraling Intelligence Thesis: Intelligence as a Bounded Non-Convergent Trajectory

Abstract

Most deployed AI systems follow a Train-Freeze-Deploy lifecycle: parameters are optimized offline and then served as a static checkpoint until a new training cycle produces a replacement. This design assumes intelligence can be captured as a fixed point in parameter space, making continual adaptation brittle under distribution shift. This paper advances a different thesis: intelligence is better modeled as a bounded trajectory than as a convergent point. The central object is not a final parameter vector W∗ but an evolving state W(t) whose identity is its history. The paper proposes the Spiraling Intelligence Architecture (SIA) as a concrete instantiation, grounded in the Infinite Transformation Principle (ITP): irreversible, history-dependent evolution with recurrent revisitation and self-maintenance. The core mechanism combines (i) Rotational Hebbian Learning (RHL), a drift-inducing complex-valued plasticity rule that separates memories by phase, and (ii) an Autopoietic Sleep Cycle that reorganizes the internal structure without external labels. Through a minimal, reproducible toy simulation, the paper demonstrates the qualitative signature implied by the thesis: under distribution switching, a spiraling learner exhibits bounded non-convergence and recurrent re-alignment peaks for an earlier task, in contrast to a convergent baseline that relaxes to a static compromise. The empirical scope is intentionally modest; the contribution is a falsifiable theoretical framing and a minimal mechanism that exhibits the predicted qualitative behaviour.
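To make the abstract's central mechanism concrete, the following is a minimal sketch of what a Rotational Hebbian Learning step could look like. It is an illustrative assumption, not the paper's actual implementation: the update rule `rhl_step`, the learning rate, decay, and rotation angle, and the alternating two-task setup are all hypothetical choices made here to exhibit the qualitative behaviour the abstract describes — a weight trajectory that stays bounded (via decay) yet never settles to a fixed point (via the phase-rotated Hebbian term).

```python
import numpy as np

rng = np.random.default_rng(0)

def rhl_step(W, x, y, eta=0.05, theta=0.1, decay=0.02):
    """Hypothetical RHL update: a Hebbian outer product whose phase is
    rotated by theta, so successive memories land at different phases
    in the complex plane instead of overwriting one another."""
    rotation = np.exp(1j * theta)            # phase rotation separating memories
    hebb = np.outer(y, x).astype(complex)    # classic Hebbian outer product
    return (1 - decay) * W + eta * rotation * hebb  # decay bounds the trajectory

n_in, n_out = 8, 4
W = np.zeros((n_out, n_in), dtype=complex)

# Drive the weights with two alternating "tasks" (fixed random input/output
# pairs) and record the weight norm over time: the trajectory is bounded
# but keeps moving rather than converging to a static compromise.
tasks = [(rng.standard_normal(n_in), rng.standard_normal(n_out)) for _ in range(2)]
norms = []
for t in range(400):
    x, y = tasks[t % 2]
    W = rhl_step(W, x, y, theta=0.1 if t % 2 == 0 else -0.1)  # opposite phase per task
    norms.append(np.linalg.norm(W))

print(f"final |W| = {norms[-1]:.3f} (bounded, non-zero)")
```

Under these assumptions, the decay term caps the norm while the rotating Hebbian drive keeps the state circulating, which is the "bounded non-convergent trajectory" signature the thesis predicts.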
