Direct Training of Networks of Morris-Lecar Neurons with Backprop
Abstract
Spiking Neural Networks (SNNs) have the potential to replicate the brain's computational efficiency by explicitly incorporating action potentials, or "spikes", a feature absent from most artificial neural networks. However, training SNNs is difficult due to the non-differentiable nature of the most common spiking model: the integrate-and-fire neuron. This study investigates whether some of the difficulty in training SNNs arises from the use of integrate-and-fire neurons rather than smoother alternatives, such as conductance-based neurons. To that end, we considered networks of Morris-Lecar (ML) neurons, a differentiable conductance-based neuron model. Networks were built using kinetic synaptic models that smoothly link presynaptic voltage dynamics directly to postsynaptic conductance changes, ensuring that all components remain fully differentiable. Switching to biophysically detailed models of synapses and neurons enabled direct end-to-end training through Backpropagation Through Time (BPTT). Biophysically detailed networks were successfully trained on image classification, regression, and time series prediction tasks. These results demonstrate the feasibility of employing biophysically detailed, differentiable point-neuron models to create SNNs that serve as more accurate paradigms for the study of neural computation and learning. Further, this work confirms that some of the difficulty in translating gradient-based learning algorithms from machine learning may arise from model choice, rather than from SNNs being intrinsically difficult to train.
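To make the smoothness argument concrete, the sketch below integrates the standard Morris-Lecar equations (Morris & Lecar, 1981). The parameter values are illustrative textbook defaults (Rinzel-Ermentrout style), not the values used in this study; the point is only that every term (including spike generation via the tanh gating functions) is smooth in the state variables, so gradients can flow through the dynamics without surrogate derivatives.

```python
import numpy as np

def morris_lecar(I_ext, T=200.0, dt=0.05):
    """Euler-integrate one Morris-Lecar neuron under constant drive I_ext
    (uA/cm^2) for T ms; returns the membrane voltage trace (mV).
    Parameter values are illustrative, not those of the paper."""
    C = 20.0                                # capacitance (uF/cm^2)
    g_L, g_Ca, g_K = 2.0, 4.0, 8.0          # max conductances (mS/cm^2)
    V_L, V_Ca, V_K = -60.0, 120.0, -84.0    # reversal potentials (mV)
    V1, V2, V3, V4 = -1.2, 18.0, 12.0, 17.4 # gating half-activations/slopes
    phi = 0.067                             # K-gate rate scale (1/ms)

    n = int(T / dt)
    V = np.empty(n)
    w = np.empty(n)
    V[0], w[0] = -60.0, 0.0
    for t in range(n - 1):
        # All gating functions are smooth (tanh/cosh), hence differentiable:
        m_inf = 0.5 * (1.0 + np.tanh((V[t] - V1) / V2))  # fast Ca2+ gate
        w_inf = 0.5 * (1.0 + np.tanh((V[t] - V3) / V4))  # K+ gate target
        tau_w = 1.0 / np.cosh((V[t] - V3) / (2.0 * V4))  # K+ gate timescale
        dV = (I_ext
              - g_L * (V[t] - V_L)
              - g_Ca * m_inf * (V[t] - V_Ca)
              - g_K * w[t] * (V[t] - V_K)) / C
        dw = phi * (w_inf - w[t]) / tau_w
        V[t + 1] = V[t] + dt * dV
        w[t + 1] = w[t] + dt * dw
    return V
```

Because the spike is a continuous trajectory of the ODE rather than a threshold-and-reset event, the same update loop written in an autodiff framework (e.g., as tensor operations) is directly trainable with BPTT.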
Author summary
The brain's information-processing efficiency arises in part from neurons communicating via discrete spikes. Spiking Neural Networks (SNNs) mimic this process at the neuronal level but have been difficult to train because most machine learning algorithms are not directly applicable. Most SNNs use integrate-and-fire neurons, a modelling framework that simplifies spikes into abrupt, non-differentiable voltage changes, which prevents the use of powerful, standard AI training methods that rely on derivatives to compute gradients (e.g., backpropagation). In this work, we asked whether this difficulty could be overcome with end-to-end differentiable spiking neural networks. We built fully differentiable SNNs from Morris-Lecar neurons, a biophysically detailed model that produces smooth spikes, coupled through differentiable kinetic synapses. Because the entire network is mathematically differentiable, we could train it directly with standard backpropagation through time on different tasks (regression, classification, and chaotic time series prediction). This work demonstrates that the use of integrate-and-fire models may be limiting the application of machine learning algorithms to understanding how learning functions in the brain.
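The kinetic synapses mentioned above can be illustrated with the standard first-order scheme of Destexhe, Mainen & Sejnowski (1994), in which a smooth sigmoid of the presynaptic voltage drives a postsynaptic gating variable. The parameter values below are illustrative defaults, not those of the paper; the key property is that the mapping from presynaptic voltage to postsynaptic conductance involves no discrete spike event, so it composes differentiably with the neuron dynamics.

```python
import numpy as np

def kinetic_synapse(V_pre, dt=0.05, alpha=1.1, beta=0.19,
                    T_max=1.0, V_T=2.0, K_p=5.0):
    """Map a presynaptic voltage trace (mV, sampled every dt ms) to a smooth
    synaptic gating variable s(t) in [0, 1] via
        ds/dt = alpha * T(V_pre) * (1 - s) - beta * s,
    with sigmoidal transmitter release T(V) = T_max / (1 + exp(-(V - V_T)/K_p)).
    Parameter values are illustrative, not those of the paper."""
    s = np.zeros_like(V_pre)
    for t in range(len(V_pre) - 1):
        # Smooth (differentiable) release as a function of presynaptic voltage:
        T = T_max / (1.0 + np.exp(-(V_pre[t] - V_T) / K_p))
        ds = alpha * T * (1.0 - s[t]) - beta * s[t]
        s[t + 1] = s[t] + dt * ds
    return s  # postsynaptic conductance would be g_syn = g_max * s
```

Feeding this function a voltage trace with a depolarized plateau shows s rising smoothly during the "spike" and decaying exponentially afterwards, exactly the presynaptic-voltage-to-conductance link the abstract describes.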