The geometry of robustness in spiking neural networks

Curation statements for this article:
  • Curated by eLife


    Evaluation Summary:

    The article introduces a geometrical interpretation for the dynamics and function of certain spiking networks, based on earlier work of Machens and Deneve. Given that spiking networks are notoriously hard to understand, the approach could prove useful for many computational neuroscientists. Here, that visualization tool serves to assess how fragile the network is to perturbations such as neuronal death or spurious noise in excitation and inhibition.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #2 and Reviewer #3 agreed to share their name with the authors.)

This article has been reviewed by the following groups


Abstract

Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a 'bounding box'. Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks — low-dimensional representations, heterogeneity of tuning, and precise negative feedback — may be key to understanding the robustness of neural systems at the circuit level.

Article activity feed

  1. Joint Public Review:

    This paper is about networks of spiking neurons that represent continuous real-world variables, and specifically about the robustness of such networks to perturbations, such as the loss of neurons or the occurrence of synaptic noise. The senior author (along with Deneve) has developed a framework for recurrently coupled networks of spiking neurons that act as optimal encoders. The design of these networks starts with a presumed target for the readout (here, an autoencoder) and derives from it the input weights, connectivity, and dynamics of the network.

    The optimal encoding framework links many network parameters - such as the spike threshold, feedforward and recurrent connection weights, and the decoder weights. Under this optimal construction, each neuron in the network fires only when necessary to improve the output, and that spike propagates through the network in a very specific manner. This suggests a certain fragility to the framework. Because biology is noisy, one cannot expect all these parameters to remain perfectly adjusted. A study of how this network responds to various network perturbations is thus an essential step in characterizing this overall coding framework and its relation to real networks.
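    The "fire only when necessary to improve the output" rule can be made concrete with a small toy model. The sketch below is our own minimal, discrete-time, leak-free rendition of this kind of greedy spike-coding scheme, with hypothetical parameters, not the authors' implementation: each neuron's voltage is its decoder weight projected onto the current coding error, and a spike is emitted only when that projection exceeds a threshold of half the decoder weight's squared norm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 50, 2                            # neurons, encoded signal dimensions
    D = 0.05 * rng.standard_normal((K, N))  # decoder weights (one column per neuron)
    T = np.sum(D**2, axis=0) / 2            # thresholds T_i = ||D_i||^2 / 2

    x = np.array([0.5, -0.3])               # target signal to encode
    x_hat = np.zeros(K)                     # network readout estimate
    n_spikes = 0
    for _ in range(10_000):
        V = D.T @ (x - x_hat)               # voltages = decoder-projected coding error
        i = np.argmax(V - T)                # neuron closest to (or past) its threshold
        if V[i] <= T[i]:                    # no neuron suprathreshold: stop
            break
        x_hat += D[:, i]                    # a spike moves the readout by one decoder weight
        n_spikes += 1

    err = np.linalg.norm(x - x_hat)         # residual coding error
    ```

    Under this rule every spike strictly shrinks the squared error, since ||e - D_i||^2 = ||e||^2 - 2(V_i - T_i) with V_i = D_i·e and T_i = ||D_i||^2/2, so the loop halts with all voltages at or below threshold, i.e., with the error confined by the thresholds.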

    This paper also introduces a valuable geometric tool to interpret the function of the network: a "bounding box" in the space of the encoded variables that limits the errors the network makes. By noting how the boundaries of this box relate to the neuronal thresholds and synaptic weights one gains an intuition for the effects of various perturbations. This tool may well have applications beyond the specific use case treated here.
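    The bounding-box idea, and the robustness it implies, can be illustrated with a toy model of the same flavor (again our own sketch with assumed parameters, not the authors' code): the stopping condition V_i <= T_i for every neuron confines the readout error to an intersection of half-spaces, one face per neuron, so the network tolerates neuron loss as long as the surviving faces still close the box.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, K = 60, 2                              # neurons, encoded dimensions
    D = 0.05 * rng.standard_normal((K, N))    # hypothetical decoder weights
    T = np.sum(D**2, axis=0) / 2              # thresholds (one box face per neuron)

    def encode(x, alive):
        """Greedy spike coding using only the surviving neurons."""
        Da, Ta = D[:, alive], T[alive]
        x_hat = np.zeros(K)
        for _ in range(10_000):
            V = Da.T @ (x - x_hat)            # voltages of surviving neurons
            i = np.argmax(V - Ta)
            if V[i] <= Ta[i]:                 # error inside the (possibly widened) box
                break
            x_hat += Da[:, i]
        return x_hat

    x = np.array([0.4, 0.7])
    alive_all = np.arange(N)
    alive_half = rng.choice(N, N // 2, replace=False)  # random "neuron death"

    err_all = np.linalg.norm(x - encode(x, alive_all))
    err_half = np.linalg.norm(x - encode(x, alive_half))
    ```

    With half the neurons removed, the corresponding faces of the box disappear and the error bound loosens only where coverage thinned, so `err_half` typically remains small. If every decoder weight pointing in some direction were lost, that side of the box would open and the error along it could grow without bound, which is exactly the kind of fragility under perturbation that the review discusses.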

    The paper is well written, with expert use of pictures to visualize complex relationships. However, the reviewers were left with concerns about the consistency of the bounding-box picture under certain perturbations. Other open questions include the role that the specific network connectivity plays in the various forms of robustness and fragility, and the role of plasticity in setting up the network connections.