Locally balanced inhibition allows for robust learning of input-output associations in feedforward networks with Hebbian plasticity
Abstract
In neural networks within the brain, the activity of a post-synaptic neuron is determined by the combined influence of many pre-synaptic neurons. This distributed processing enables mechanisms like Hebbian plasticity to associate sensory inputs with specific internal states, as seen in feedforward structures such as the CA1 region of the hippocampus. By modifying synaptic weights through Hebbian rules, sensory inputs can subsequently elicit outputs that consistently reflect their corresponding internal states. When input and output patterns are uncorrelated, this approach allows a large number of distinct associations to be encoded, enabling efficient memory storage. Our study demonstrates a critical limitation that arises when output patterns become weakly correlated with input patterns through the feedforward network's intrinsic connectivity. In these cases, the Hebbian rule preferentially strengthens synaptic weights shared across patterns, leading to a "freezing" of the network's structure. Over time this produces highly correlated output patterns, effectively reducing the network's capacity to store diverse associations and limiting its flexibility in learning. To address this challenge, we find that a mechanism of locally balanced inhibition, which has been shown to be a key feature of cortical circuits in vivo, counteracts the undesired correlations between input and output patterns. By dynamically regulating inhibitory input, locally balanced inhibition prevents the over-strengthening of shared weights, restoring the network's ability to maintain robust and flexible learning. This finding underscores the importance of inhibitory mechanisms in enabling efficient and adaptive information processing in neural circuits, offering insights into how biological networks maintain their remarkable capacity for associative learning.
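The mechanism described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' model: the network sizes, learning rate, weight normalization, and in particular the subtractive term proportional to the mean drive (used here as a hypothetical stand-in for locally balanced inhibition) are all illustrative assumptions. The sketch compares the mean pairwise correlation of output patterns after Hebbian learning with and without the inhibitory term.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_patterns = 100, 50, 20

# Sparse random binary input patterns and a random positive feedforward
# weight matrix (illustrative choices, not taken from the paper).
X = (rng.random((n_patterns, n_in)) < 0.2).astype(float)
W0 = rng.random((n_out, n_in)) * 0.1

def output(W, x, inhibition=0.0):
    """Threshold-linear output. `inhibition` scales a subtractive term
    proportional to the mean drive across the population -- a hypothetical
    stand-in for locally balanced inhibition."""
    drive = W @ x
    return np.maximum(drive - inhibition * drive.mean(), 0.0)

def train(W, X, eta=0.01, inhibition=0.0, epochs=50):
    """Hebbian learning with multiplicative row normalization to keep
    weights bounded."""
    W = W.copy()
    for _ in range(epochs):
        for x in X:
            y = output(W, x, inhibition)
            W += eta * np.outer(y, x)  # Hebbian outer-product update
            W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W

def mean_output_correlation(W, X, inhibition=0.0):
    """Mean pairwise Pearson correlation between output patterns."""
    Y = np.array([output(W, x, inhibition) for x in X])
    C = np.corrcoef(Y)
    iu = np.triu_indices(n_patterns, k=1)
    return C[iu].mean()

W_plain = train(W0, X)
W_inh = train(W0, X, inhibition=1.0)
c_plain = mean_output_correlation(W_plain, X)
c_inh = mean_output_correlation(W_inh, X, inhibition=1.0)
print(f"mean output correlation without inhibition: {c_plain:.3f}")
print(f"mean output correlation with inhibition:    {c_inh:.3f}")
```

Under these assumptions, the plain Hebbian rule tends to drive all output rows toward the input directions shared across patterns, so output correlations grow, whereas the subtractive term leaves each pattern's output dominated by its above-average units, which is the decorrelating effect the abstract attributes to locally balanced inhibition.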