Rapidly Reconfigurable Dynamic Computing in Neural Networks with Fixed Synaptic Connectivity

Abstract

Learning and memory in the brain’s neocortex have long been hypothesised to be primarily mediated by synaptic plasticity. Extensive research in artificial neural networks has shown that training networks by adjusting connection weights faces computational challenges, including large parameter spaces and the tendency of new learning to interfere with previous learning (catastrophic forgetting). We propose that the brain, which is resistant to these challenges, can also learn by modulating the excitability of each neuron in a network rather than changing synaptic strengths. We show here that learning a task-specific set of bias currents enables a feedforward or recurrent network with fixed and randomly assigned connections to perform well on, and switch between, dozens of tasks, including regression, classification, autonomous time series generation, a game, and robotic control. Bias-only learning also provides a novel mechanistic explanation for representational drift. It directly links the noise robustness of neuronal representations on short and long time scales to the ability of neural circuits to preserve learned information while remaining adaptable. We postulate that subcortical structures, such as the basal ganglia or cerebellum, may provide similar bias inputs to the neocortex for rapid task learning and robustness against interference.
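To make the core idea concrete: bias-only learning freezes all synaptic weights at their random initial values and trains only a per-task bias (one scalar "bias current" per neuron). Below is a minimal sketch of this scheme for a feedforward network, written in PyTorch. It is not the authors' code; the class name, dimensions, activation, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiasOnlyMLP(nn.Module):
    """Hypothetical illustration: feedforward net with fixed random
    weights; only per-task hidden-layer biases are trained."""

    def __init__(self, in_dim, hidden_dim, out_dim, n_tasks):
        super().__init__()
        # Fixed, randomly assigned connections: never updated.
        self.w_in = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w_out = nn.Linear(hidden_dim, out_dim, bias=False)
        for p in self.parameters():
            p.requires_grad = False
        # One trainable bias vector per task: the only learned parameters.
        self.task_bias = nn.Parameter(torch.zeros(n_tasks, hidden_dim))

    def forward(self, x, task_id):
        # The task-specific bias current shifts each unit's excitability.
        h = torch.tanh(self.w_in(x) + self.task_bias[task_id])
        return self.w_out(h)

# Train the biases for a single task on toy regression data.
model = BiasOnlyMLP(in_dim=10, hidden_dim=256, out_dim=1, n_tasks=20)
opt = torch.optim.Adam([model.task_bias], lr=1e-2)
x, y = torch.randn(64, 10), torch.randn(64, 1)
for _ in range(100):
    loss = nn.functional.mse_loss(model(x, task_id=3), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, switching tasks simply selects a different bias row, and training one task's biases leaves every other task's biases and the shared random weights untouched, which illustrates the route around catastrophic forgetting that the abstract describes.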
