DelRec: learning delays in recurrent spiking neural networks
Abstract
Biological neurons transmit information with stereotyped electrical impulses called "spikes", and are sensitive to coincident spike timings. Spiking Neural Networks (SNNs), introduced in the nineties, have gained popularity in AI for their energy efficiency and performance competitive with deep learning. Among them, Recurrent SNNs (RSNNs) are particularly appealing for their ability to learn long-term dependencies and exhibit rich dynamics. In SNNs, each connection can have a weight and a transmission delay, both of which are plastic in the brain. While theory has long suggested that trainable delays enhance a network's expressivity, practical learning methods emerged only recently and remain mostly limited to feedforward delays. Here, we introduce DelRec, the first method to jointly optimize recurrent delays with synaptic weights in RSNNs via surrogate gradient learning, compatible with any spiking neuron model. DelRec works in discrete time, leveraging differentiable interpolation to handle non-integer delays with well-defined gradients at training time, then rounding them for inference. Using simple neurons, DelRec outperforms all baselines on a chaotic time-series prediction task, and sets new state-of-the-art accuracies on two challenging temporal datasets. Analysis of trained networks reveals structured, depth-dependent spatio-temporal receptive fields and delay-weight co-adaptation reshaping temporal selectivity. This work establishes recurrent delay optimization as a promising framework for both biological circuit modeling and neuromorphic computing.
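The core mechanism described above, handling a non-integer delay by interpolating between adjacent integer time steps so that the gradient with respect to the delay is well defined, can be sketched as follows. This is a minimal illustration assuming simple linear interpolation over a spike/activity buffer; the function names and the choice of linear interpolation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def delayed_read(buffer, t, d):
    """Read buffer at time t - d, where the delay d may be non-integer.
    Linearly interpolate between the two neighboring integer delays,
    which makes the output differentiable in d (the key idea behind
    training fractional delays with surrogate gradients)."""
    d0 = int(np.floor(d))      # integer part of the delay
    frac = d - d0              # fractional part in [0, 1)
    return (1.0 - frac) * buffer[t - d0] + frac * buffer[t - d0 - 1]

def grad_wrt_delay(buffer, t, d):
    """Analytic gradient of delayed_read with respect to d:
    d/dd [(1-frac)*x[t-d0] + frac*x[t-d0-1]] = x[t-d0-1] - x[t-d0]."""
    d0 = int(np.floor(d))
    return buffer[t - d0 - 1] - buffer[t - d0]

# At inference, the trained delay would simply be rounded to the
# nearest integer and the buffer read directly, with no interpolation.
```

For example, with `buffer = [0, 1, 4, 9, 16]`, `t = 4`, and `d = 1.5`, the read falls halfway between delays 1 and 2, giving `0.5 * 9 + 0.5 * 4 = 6.5`, and the delay gradient is `4 - 9 = -5`, signaling that increasing the delay would decrease the read value.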