Representation Mechanics: Invariant-Governed Learning Dynamics

Abstract

We present Representation Mechanics, a theoretical and empirical framework characterizing learning in invariant-governed domains. We demonstrate that neural networks with set-valued operations (discrete potentials {-W, 0, +W}) and learned selection exhibit sharp phase transitions to generalization without pretraining. The Platonic Spike, or Early Phase Transition, in which validation accuracy exceeds training accuracy during early epochs, signals the discovery of structural invariants before instance memorization. We validate these findings on controlled synthetic tasks (spiral classification, closed-world logic, declarative language) chosen to isolate geometric mechanics from the statistical confounds of natural language. We contrast this Form-First paradigm with diffusion-dominated learning and define the conditions (Basin Preservation and Friction Set tracking) required to maintain these invariants under continual-learning pressure. Our results suggest that restricting representational freedom via discrete potentials is not a limitation but a necessary inductive bias for rule-consistent generalization.
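To make the abstract's core mechanism concrete, below is a minimal sketch of a layer whose effective weights are restricted to the discrete potentials {-W, 0, +W}, together with a simple detector for the Platonic Spike. Everything here is an illustrative assumption, not the paper's implementation: the name TernaryLinear, the threshold-based selection rule, the straight-through gradient estimator, and the platonic_spike heuristic are all hypothetical stand-ins for the "learned selection" mechanism the abstract describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TernaryLinear(nn.Module):
    """Hypothetical sketch: a linear layer whose effective weights are
    restricted to the discrete potentials {-W, 0, +W}.

    'Learned selection' is approximated by thresholding continuous latent
    weights; a straight-through estimator lets gradients reach the latents.
    """

    def __init__(self, in_features: int, out_features: int,
                 threshold: float = 0.05):
        super().__init__()
        self.latent = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.W = nn.Parameter(torch.tensor(1.0))  # shared magnitude, learned
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Map each latent weight to -1, 0, or +1: near-zero latents select 0.
        ternary = torch.sign(self.latent) * (
            self.latent.abs() > self.threshold
        ).float()
        # Straight-through estimator: the forward pass uses the discrete
        # values, the backward pass treats quantization as the identity.
        q = self.latent + (ternary - self.latent).detach()
        return F.linear(x, q * self.W)


def platonic_spike(train_acc: list[float], val_acc: list[float],
                   patience: int = 3) -> bool:
    """Flag the early phase transition described in the abstract: validation
    accuracy exceeding training accuracy for the first `patience` epochs."""
    if len(val_acc) < patience or len(train_acc) < patience:
        return False
    return all(v > t for t, v in zip(train_acc[:patience], val_acc[:patience]))
```

A thresholded sign function is only one way to realize set-valued selection; the design choice it illustrates is that the forward computation sees strictly discrete weights while optimization still operates on a continuous relaxation.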