Population sparseness determines strength of Hebbian plasticity for maximal memory lifetime in associative networks

Abstract

The brain can efficiently learn and form memories from limited exposure to stimuli. One key factor believed to support this ability is sparse coding, which can reduce overlap between representations and minimize interference. It is well known that increased sparseness can enhance memory capacity, yet its impact on the speed of learning remains poorly understood. Here we analyze the relationship between population sparseness and learning speed: specifically, how the learning speed that maximizes memory capacity depends on the sparseness of the neural code, and how this in turn affects the network's maximal capacity. To this end, we study a feedforward network with Hebbian and homeostatic plasticity and a two-state synapse model. The network learns to associate binary input-output pattern pairs, where sparseness corresponds to a small fraction of active neurons per pattern. Learning speed is modeled as the probability of synaptic changes during learning. Our results are based on both network simulations and an analytical theory that predicts the expected memory capacity and the optimal learning speed. For both perfect and noisy retrieval cues, we find that the optimal learning speed indeed increases with increasing pattern sparseness, an effect that is more pronounced for input sparseness than for output sparseness. Interestingly, the optimal learning speed remains unchanged across different network sizes if the number of active units per input pattern is kept constant. While the capacity obtained at the optimal learning speed increases monotonically with output sparseness, its dependence on input sparseness is non-monotonic. Overall, we provide the first detailed investigation of the interactions between population sparseness, learning speed, and storage capacity. Our findings suggest that differences in population sparseness across brain regions may underlie observed differences in how quickly those regions adapt and learn.
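As a rough illustration of the kind of model described in the abstract, the sketch below simulates an associative network of this general type: binary input-output patterns with a given sparseness, two-state (binary) synapses, and stochastic Hebbian potentiation applied with probability q, which plays the role of the learning speed. The population sizes, the sparseness values, the particular homeostatic depression rule, and all names (N_in, N_out, f_in, f_out, q) are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- illustrative parameters (assumptions, not the paper's values) ----
N_in, N_out = 1000, 1000   # input / output population sizes
f_in, f_out = 0.05, 0.05   # fraction of active units per pattern (sparseness)
q = 0.2                    # "learning speed": probability of a synaptic change
P = 200                    # number of pattern pairs presented sequentially

def random_pattern(n, f):
    """Binary pattern with round(f * n) active units."""
    p = np.zeros(n, dtype=bool)
    p[rng.choice(n, size=int(round(f * n)), replace=False)] = True
    return p

patterns = [(random_pattern(N_in, f_in), random_pattern(N_out, f_out))
            for _ in range(P)]

# Two-state (binary) synapses, initially all in the depressed state.
W = np.zeros((N_out, N_in), dtype=bool)

for x, y in patterns:
    coactive = np.outer(y, x)           # Hebbian condition: pre and post active
    flip = rng.random(W.shape) < q      # stochastic plasticity with probability q
    W |= coactive & flip                # potentiation
    # One possible homeostatic depression rule: active post, inactive pre.
    W &= ~(np.outer(y, ~x) & flip)

def recall(x, theta):
    """Threshold retrieval: an output unit fires if its summed input exceeds theta."""
    return (W.astype(int) @ x.astype(int)) >= theta

# Test retrieval of the most recently stored pair with a perfect cue.
x, y = patterns[-1]
theta = 0.8 * x.sum()                   # threshold relative to number of active inputs
errors = np.count_nonzero(recall(x, theta) != y)
print(f"retrieval errors for last pattern: {errors} / {N_out}")
```

Sweeping q for several values of f_in and f_out, and counting how many of the stored pattern pairs are retrieved within some error tolerance, would trace out capacity-versus-learning-speed curves of the kind whose maxima the study characterizes.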

Author summary

The brain can efficiently learn and form memories from limited exposure to stimuli. One key factor believed to support this ability is the way neural circuits encode and organize information about the external world. Population sparseness refers to the phenomenon that, in many brain regions, only a small subset of neurons is active at any given time or in response to a particular stimulus. Sparse codes are believed to reduce overlap between representations and minimize interference, and can thus enhance the storage capacity of a network. Here we investigate the effect of population sparseness on the speed of learning input-output associations in a network model with Hebbian learning. We find that the learning speed that yields the maximal capacity increases with increasing sparseness. The maximal capacity increases for sparser input and output representations, although for the input only up to a point beyond which further sparsification becomes detrimental. These findings may help explain why some brain regions learn faster than others.
