The Topological Reinforcement Operator (ORT): A Parsimony Principle for Memory Consolidation in Complex Networks

Abstract

Memory engram consolidation is a central challenge in computational neuroscience. This work introduces the Topological Reinforcement Operator (ORT), a post-training mechanism that reinforces topologically relevant nodes to induce functional engrams in complex networks. We validated the ORT using a robust functional protocol based on normalized diffusion and F1-score, applied to citation networks (Cora, Citeseer, Pubmed) and biological connectomes (macaque, human). The results reveal a dual consolidation principle: in information networks, memory resilience emerges from broad critical-mass cores (P90), whereas in optimized biological networks, a smaller elite core (P95) predominates, achieving a performance of up to 87.4% in the human connectome. Finally, we demonstrate that the ORT, based on degree centrality, is ∼96 times faster than PageRank, establishing a principle of computational parsimony that links structure, function, and efficiency in neural networks.
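The percentile-core selection the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `percentile_core`, the NetworkX library, and the Barabási–Albert toy graph (a stand-in for a connectome or citation network) are all assumptions; the paper's exact reinforcement and diffusion protocol is not reproduced here.

```python
# Hedged sketch: selecting a P90 / P95 "core" by degree centrality,
# in the spirit of the abstract. All names here are illustrative.
import time
import numpy as np
import networkx as nx

def percentile_core(G, p=90):
    """Return the set of nodes whose degree centrality is at or above
    the p-th percentile -- the candidate core to be reinforced."""
    dc = nx.degree_centrality(G)
    threshold = np.percentile(list(dc.values()), p)
    return {n for n, c in dc.items() if c >= threshold}

# Toy scale-free graph (assumed stand-in for a real connectome).
G = nx.barabasi_albert_graph(500, 3, seed=42)
core90 = percentile_core(G, 90)   # broad "critical mass" core
core95 = percentile_core(G, 95)   # smaller "elite" core
print(len(core90), len(core95))

# Rough cost comparison between the two centrality choices; the
# abstract's ~96x figure is from the paper's own benchmarks, and the
# ratio here will vary with graph size and hardware.
t0 = time.perf_counter(); nx.degree_centrality(G); t_deg = time.perf_counter() - t0
t0 = time.perf_counter(); nx.pagerank(G); t_pr = time.perf_counter() - t0
print(f"degree: {t_deg:.2e}s, pagerank: {t_pr:.2e}s")
```

Because the P95 threshold is at least the P90 threshold, the elite core is always contained in the critical-mass core, mirroring the nested P90/P95 structure the abstract contrasts across network types.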
