Going deeper with morphologically detailed neural networks by error-backpropagating mirror neurons
Abstract
Neuronal dendrites possess powerful computational capabilities but are computationally expensive to simulate. As a result, exploring how large-scale, detailed multi-compartment neural networks can learn is challenging, with significant obstacles in both developing learning algorithms and building suitable simulation infrastructure. Here, we present an extension of the DeepDendrite framework that enables the construction and data-driven training of deep, detailed multi-compartment neural networks using highly modular layer components. In our approach, the gradient at each layer is computed by simulating mirror neurons in a feedback pathway concurrently with the detailed neurons in the feedforward pathway. This mechanism provides a biologically inspired implementation of backpropagation in deep, morphologically detailed networks. We evaluate our framework on classic image classification benchmarks, achieving performance comparable to standard artificial neural networks (ANNs) with the same architectures. Furthermore, adversarial attack experiments show that detailed networks are consistently more robust than their ANN counterparts, and weight-transfer tests confirm that this robustness gain stems from dendritic computations rather than the specific weight values. In conclusion, our framework offers a valuable tool for investigating learning in deep, detailed neural networks and may unlock new opportunities to harness the computational potential of dendrites at scale.
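The mirror-neuron mechanism described above can be illustrated with a minimal sketch. The code below is a simplified point-neuron abstraction, not the DeepDendrite API: a feedforward pathway computes activations, while a parallel feedback pathway of "mirror" units carries the output error backwards through transposed weights, yielding the local error signal that drives each layer's weight update. All names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer rate-based network (illustrative only; the
# paper's feedforward pathway uses detailed multi-compartment neurons).
n_in, n_hid, n_out = 4, 5, 3
W1 = rng.normal(size=(n_hid, n_in)) * 0.5
W2 = rng.normal(size=(n_out, n_hid)) * 0.5

def relu(x):
    return np.maximum(x, 0.0)

x = rng.normal(size=n_in)   # input pattern
t = rng.normal(size=n_out)  # target (squared-error loss assumed)

# Feedforward pathway
h_pre = W1 @ x
h = relu(h_pre)
y = W2 @ h

# Feedback pathway: mirror units receive the output error and relay it
# backwards, producing each layer's error signal concurrently with the
# feedforward activity.
e_out = y - t                          # mirror units at the output layer
e_hid = (W2.T @ e_out) * (h_pre > 0)   # mirror units at the hidden layer

# Local weight updates computed from mirror-unit activity; these equal
# the backpropagation gradients of the loss 0.5 * ||y - t||^2.
grad_W2 = np.outer(e_out, h)
grad_W1 = np.outer(e_hid, x)
```

Because the feedback pathway here reuses the transposed feedforward weights, the mirror-unit updates coincide exactly with the gradients that standard backpropagation would compute, which is the sense in which the mechanism implements backpropagation in a biologically inspired form.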