Efficient learning and intrinsic noise filtering in recurrent spiking neural networks trained with e-prop
Abstract
Objective
Biologically plausible learning rules for neural networks, such as e-prop (eligibility propagation), are essential both for advancing neuromorphic computing and for understanding fundamental mechanisms of learning in animal brains. However, their behavior under different network conditions remains unclear.
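For orientation, e-prop factorizes the error gradient into an online learning signal and a locally computed eligibility trace. Schematically, in the general form in which e-prop is usually stated (the notation below is not taken from this article), the weight update is

\[
\Delta W_{ji} \;=\; -\,\eta \sum_{t} L_j^{t}\, e_{ji}^{t},
\qquad
e_{ji}^{t} \;\approx\; \frac{\partial z_j^{t}}{\partial W_{ji}},
\]

where \(\eta\) is the learning rate, \(z_j^{t}\) is the output of neuron \(j\) at time \(t\), \(L_j^{t}\) is a per-neuron learning signal (e.g., derived from broadcast output errors), and \(e_{ji}^{t}\) is an eligibility trace accumulated forward in time from pre- and postsynaptic activity, so that no backpropagation through time is required.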
Approach
Here, we investigate the performance of the e-prop learning algorithm in recurrent spiking neural networks (RSNNs) across different levels of recurrent connectivity and input noise, using a complex temporal credit assignment task: a supervised classification problem known to be solvable by rodents.
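To make the setup concrete, the sketch below builds a randomly masked recurrent weight matrix (to vary connectivity) and an additive-noise input model. It is only an illustrative sketch: the layer sizes, sparsity level, noise standard deviation, and helper names are hypothetical placeholders, not values or code from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_rec = 40, 200   # hypothetical layer sizes
sparsity = 0.9          # fraction of recurrent weights clamped to zero
noise_std = 0.1         # standard deviation of additive input noise

# Dense initial input and recurrent weights.
w_in = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_rec))
w_rec = rng.normal(0.0, 1.0 / np.sqrt(n_rec), size=(n_rec, n_rec))

# Fixed binary mask enforcing the desired recurrent connectivity level;
# re-applying it after every weight update keeps pruned synapses at zero.
rec_mask = (rng.random((n_rec, n_rec)) > sparsity).astype(float)
np.fill_diagonal(rec_mask, 0.0)   # no self-connections
w_rec *= rec_mask

def noisy(x, std=noise_std):
    """Add Gaussian noise to the input signal (one simple model of input noise)."""
    return x + rng.normal(0.0, std, size=x.shape)
```

A connectivity/noise sweep of the kind described above would then repeat training for several values of `sparsity` and `noise_std` and compare classification performance.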
Main results
We show that increased sparsity in the recurrent layer significantly enhances learning performance by promoting the generation of more diverse activation patterns. Analysis of the network’s evolution further reveals that the e-prop-trained input layer evolves to route distinct inputs to different regions of the recurrent layer while suppressing the contribution of noise. This partially resembles signal routing functions attributed to the thalamus in mammalian sensory systems, providing additional support for the biological plausibility of e-prop.
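One simple way to operationalize the "diversity of activation patterns" mentioned above is a mean pairwise distance between per-trial activity vectors of the recurrent layer. The sketch below uses cosine distance between firing-rate vectors; this choice of metric is an assumption made for illustration and not necessarily the analysis used in the article.

```python
from itertools import combinations
import numpy as np

def pattern_diversity(rates):
    """Mean pairwise cosine distance between per-trial firing-rate vectors.

    rates: array of shape (n_trials, n_neurons). Higher values indicate
    more dissimilar, i.e. more diverse, recurrent activation patterns.
    """
    norms = np.linalg.norm(rates, axis=1, keepdims=True)
    unit = rates / np.clip(norms, 1e-12, None)
    dists = [1.0 - float(unit[i] @ unit[j])
             for i, j in combinations(range(len(rates)), 2)]
    return float(np.mean(dists))
```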
Significance
These findings highlight the efficiency of biologically inspired training of RSNNs and the advantages it can confer, such as intrinsic noise filtering.