Representational drift shows same-class acceleration in visual cortex and artificial neural networks
Abstract
The neural code is not fixed but shows substantial representational drift over time. It has been proposed that representational drift reflects continuous learning resulting from input-dependent plasticity. Using theoretical analysis and simulations in artificial neural networks, we show that input-dependent plasticity entails a same-class-acceleration principle: representational drift for a given class of stimuli is predominantly caused by presenting that class of stimuli, rather than by presenting other classes of stimuli. We analyze electrophysiological recordings of mouse visual cortex to examine whether within-session representational drift is consistent with this principle. Within-session representational drift was not explained by changes in behavioral state. Instead, it showed a systematic temporal structure that reflected sensory experience rather than behavior. Drift for a given set of stimuli accelerated during blocks in which that set was presented and slowed during blocks in which other stimuli were presented. Thus, sensory inputs, not elapsed time, organize representational drift in a stimulus-specific manner, consistent with theoretical predictions derived from training neural networks.
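The same-class-acceleration principle can be illustrated with a minimal toy simulation. The sketch below is not the authors' model; it is a hypothetical idealization in which input-dependent plasticity is modeled as small random weight changes confined to directions activated by the presented stimulus, and the two stimulus classes are placed in orthogonal input subspaces so the effect is exact: presenting class A drifts the representation of class A while leaving class B untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_per_class = 20, 8, 5

# Toy assumption: the two classes occupy orthogonal input subspaces,
# which makes the same-class effect exact in this idealized sketch.
class_a = np.zeros((n_per_class, d_in))
class_a[:, :10] = rng.normal(size=(n_per_class, 10))
class_b = np.zeros((n_per_class, d_in))
class_b[:, 10:] = rng.normal(size=(n_per_class, 10))

W = rng.normal(scale=0.1, size=(d_out, d_in))  # linear readout weights

def drift_step(W, stimuli, noise=0.02):
    """One step of input-dependent plasticity: a small random weight
    change along directions activated by one presented stimulus."""
    x = stimuli[rng.integers(len(stimuli))]
    dW = noise * np.outer(rng.normal(size=W.shape[0]), x / np.linalg.norm(x))
    return W + dW

def drift(W0, W1, stimuli):
    """Representational drift: total change in responses to a stimulus set."""
    return np.linalg.norm(stimuli @ (W1 - W0).T)

# A block presenting only class A: drift accumulates for A, not for B.
W0, Wt = W.copy(), W.copy()
for _ in range(500):
    Wt = drift_step(Wt, class_a)

print(drift(W0, Wt, class_a) > drift(W0, Wt, class_b))  # True
```

In real cortex (and in trained networks with overlapping representations) the separation is graded rather than absolute: presenting one class also slows, but does not freeze, drift for other classes, matching the block structure reported in the abstract.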