Adjoint propagation of error signal through modular recurrent neural networks for biologically plausible learning
Abstract
Biologically plausible learning mechanisms have implications for understanding brain function and for engineering intelligent systems. Inspired by the multi-scale recurrent connectivity of the brain, we introduce an adjoint propagation (AP) framework in which error signals arise naturally from recurrent dynamics and propagate simultaneously with forward inference signals. This framework eliminates the biologically implausible feedback pathways required by the backpropagation (BP) algorithm. We demonstrate that AP successfully trains networks on standard benchmark tasks, achieving accuracies (97.47% on MNIST, 89.12% on Fashion-MNIST) comparable to BP-trained networks while adhering to neurobiological constraints. Training is robust, maintaining performance over extended epochs. AP inherits the modularity of multi-region recurrent neural network (MR-RNN) models and leverages the convergence properties of RNN modules to enable fast, scalable training. Importantly, AP supports flexible resource allocation across cognitive tasks, consistent with observations in neuroscience. This framework bridges artificial and biological learning principles, paving the way for energy-efficient, brain-inspired intelligent systems.
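The abstract does not state AP's update equations. As a minimal illustration of the general idea that error signals can emerge as the fixed point of a second recurrent dynamics, rather than from an explicit backward sweep, the sketch below implements classical Almeida-Pineda recurrent backpropagation in NumPy; the network size, tanh nonlinearity, and quadratic loss are illustrative assumptions, not the authors' AP formulation.

```python
# Minimal sketch (assumed, not the authors' AP method): error signals as the
# fixed point of a second recurrent dynamics, in the style of Almeida-Pineda
# recurrent backpropagation.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                               # hidden units (illustrative size)
W = rng.normal(scale=0.9 / np.sqrt(n), size=(n, n))  # recurrent weights (contractive)
x = rng.normal(size=n)                               # constant external input
target = rng.normal(size=n)                          # illustrative regression target

phi = np.tanh                                        # pointwise nonlinearity

def dphi(u):                                         # derivative of tanh
    return 1.0 - np.tanh(u) ** 2

# Forward inference: relax the recurrent dynamics to a fixed point h*.
h = np.zeros(n)
for _ in range(200):
    h = phi(W @ h + x)
u = W @ h + x                                        # pre-activations at the fixed point

# Error propagation: a second recurrent system whose fixed point is the
# adjoint v = (I - J^T)^{-1} g, with J the forward Jacobian at h*. The error
# signal thus arises from recurrent dynamics rather than from an explicit
# backward sweep through a stored computation graph.
g = h - target                                       # dL/dh* for L = 0.5 * ||h* - target||^2
v = np.zeros(n)
for _ in range(200):
    v = W.T @ (dphi(u) * v) + g                      # iterate v <- J^T v + g

# Weight gradient from quantities local to the two fixed points.
grad_W = np.outer(dphi(u) * v, h)                    # dL/dW_ij = phi'(u_i) * v_i * h*_j

# Sanity check of one entry against a finite difference.
i, j, eps = 3, 7, 1e-5
Wp = W.copy(); Wp[i, j] += eps
hp = np.zeros(n)
for _ in range(200):
    hp = phi(Wp @ hp + x)
numeric = (np.sum((hp - target) ** 2) - np.sum((h - target) ** 2)) / (2 * eps)
print(f"analytic {grad_W[i, j]:+.6f}   numeric {numeric:+.6f}")
```

In this sketch the error phase reuses the same relaxation machinery as inference, which mirrors the abstract's point that AP leverages the convergence properties of RNN modules rather than a stored unrolled computation graph.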