A theory and recipe to construct general and biologically plausible integrating continuous attractor neural networks
Abstract
Across the brain, circuits with continuous attractor dynamics underpin the representation and storage in memory of continuous variables for motor control, navigation, and mental computations. The represented variables have various dimensions and topologies (lines, rings, Euclidean planes); the circuits exhibit continua of fixed points to store these variables and can use input velocity signals to update and maintain the representation of unobserved variables, effectively integrating the incoming velocity signal. Integration constitutes a general computational strategy that enables state estimation when a variable cannot be observed directly, suggesting that it may play a critical role in other cognitive processes. While some neural network models for integration exist, a comprehensive theory for constructing neural circuits with a given topology and integration capabilities is lacking. Here, we present a theory-driven design framework, Manifold Attractor Direct Engineering (MADE), to automatically, analytically, and explicitly construct biologically plausible continuous attractor neural networks with diverse user-specified topologies. We show how these attractor networks can be endowed with accurate integration functionality through biologically realistic circuit mechanisms. MADE networks closely resemble biological circuits in which the attractor mechanisms have been characterized. Additionally, MADE offers novel, minimal circuit models for uncharacterized topologies, enabling a systematic approach to developing and testing mathematical theories of cognition and computation in the brain.
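To make the attractor-plus-integration idea concrete, the sketch below implements a classic one-dimensional (ring) continuous attractor that integrates an angular-velocity signal: symmetric, distance-dependent recurrent weights sustain a bump of activity anywhere on the ring (a continuum of fixed points), while a velocity-gated antisymmetric weight component shifts the bump so that its position tracks the integral of the velocity input. This is a generic, textbook-style illustration, not the MADE construction itself; the network size, weight strengths, nonlinearity, and decoding step are illustrative assumptions and may need tuning.

```python
import numpy as np

# Illustrative ring attractor with velocity integration (not the MADE method).
# N neurons with preferred angles on a ring. A symmetric cosine-shaped weight
# profile supports a bump of activity at any angle (continuum of fixed points);
# a velocity-gated antisymmetric component shifts the bump, so the bump's
# position integrates the incoming angular-velocity signal.

N = 128
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)   # preferred angles
dtheta = theta[:, None] - theta[None, :]

J0, J1 = -1.0, 2.0                       # assumed inhibition / tuning strengths
W_sym = (J0 + J1 * np.cos(dtheta)) / N   # symmetric part: holds the bump
W_asym = (J1 * np.sin(dtheta)) / N       # antisymmetric part: shifts the bump

dt, tau, I_ext = 1e-3, 10e-3, 1.0        # step (s), time constant (s), drive

def f(x):
    # Saturating, non-negative rate nonlinearity (an assumed, illustrative choice).
    return np.tanh(np.maximum(x, 0.0))

def step(r, velocity):
    """One Euler step of the rate dynamics; `velocity` gates the shifting weights."""
    inp = I_ext + W_sym @ r + velocity * (W_asym @ r)
    return r + dt * (-r + f(inp)) / tau

# Start from a bump and integrate a constant angular-velocity input.
r = f(np.cos(theta - np.pi) + 0.5)
for _ in range(3000):
    r = step(r, velocity=0.3)

# Population-vector decode of the stored angle.
decoded = np.angle(np.sum(r * np.exp(1j * theta))) % (2.0 * np.pi)
print(f"decoded angle after integration: {decoded:.2f} rad")
```

The ring is only the simplest, already well-characterized case; the abstract's claim is that MADE extends this kind of construction analytically to attractors with diverse user-specified topologies and biologically realistic integration circuitry.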