Recurrent dynamics underlying transient neural representations

Abstract

Brain networks are high-dimensional, interacting complex systems that exhibit substantial structural heterogeneity as well as temporal variability. Yet, when exposed to a stimulus, their recurrent circuits perform reliable computations. The mechanisms underlying this robustness, however, remain largely unknown. Here, we combine analyses of Neuropixels recordings from awake, behaving mice with models and theory of recurrent neural networks to identify three core computational characteristics that emerge from the interplay of many network constituents and drive dynamic, reliably classifiable stimulus representations. We find that the level of recurrent inhibition in a circuit and the microscopic chaos of its dynamics drive, respectively, the mean population responses and the within- and across-class stimulus response similarities. These core characteristics in turn interact non-trivially to predict and shape the experimentally observed separability of visual and tactile stimulus representations in mouse superior colliculus. Using these characteristics to assess the information transmitted through the network for multiple stimuli reveals a trade-off in coding space: increasing the number of stimuli conveys more information but also reduces their separability, because the stimuli overlap more in the finite-dimensional neuronal space. Our analysis predicts that information keeps increasing with the number of stimuli only at the experimentally observed low levels of population activity, revealing a further crucial advantage of sparse coding.
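The within- and across-class response similarities and the crowding trade-off described in the abstract can be illustrated with a minimal simulation. The sketch below is purely illustrative and not the authors' model or data: trial responses are drawn as a synthetic class template plus Gaussian noise, within- versus across-class cosine similarities are compared, and packing more stimulus templates into a neuronal space of fixed dimension is shown to increase their worst-case pairwise overlap.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_similarities(n_neurons=50, n_classes=4, trials_per_class=20, noise=0.5):
    """Simulate trial responses as class template + Gaussian noise, then
    compare mean within-class vs across-class cosine similarity."""
    templates = rng.normal(size=(n_classes, n_neurons))
    responses = np.repeat(templates, trials_per_class, axis=0)
    responses = responses + noise * rng.normal(size=responses.shape)
    labels = np.repeat(np.arange(n_classes), trials_per_class)

    # Cosine similarity between all pairs of trials
    unit = responses / np.linalg.norm(responses, axis=1, keepdims=True)
    sim = unit @ unit.T
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = sim[same & off_diag].mean()
    across = sim[~same].mean()
    return within, across

within, across = class_similarities()
print(f"within-class similarity:  {within:.2f}")   # high: trials share a template
print(f"across-class similarity: {across:.2f}")    # near zero: templates ~orthogonal

# Crowding: with more stimulus templates in a fixed-dimensional space,
# the closest pair of templates overlaps more, limiting separability.
max_overlap = {}
for k in (4, 16, 64):
    t = rng.normal(size=(k, 50))
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    g = t @ t.T
    np.fill_diagonal(g, -np.inf)     # ignore self-similarity
    max_overlap[k] = g.max()
    print(f"{k:3d} stimuli -> max pairwise overlap {max_overlap[k]:.2f}")
```

Separable representations correspond to a large gap between the within- and across-class similarities; the crowding loop shows why adding stimuli shrinks that gap when the neuronal dimensionality stays fixed.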
