An Algorithmic Barrier to Neural Circuit Understanding

This article has been reviewed by the following groups


Abstract

Neuroscience is witnessing extraordinary progress in experimental techniques, especially at the neural circuit level. These advances are largely aimed at enabling us to understand how neural circuit computations mechanistically cause behavior. Here, using techniques from Theoretical Computer Science, we examine how many experiments are needed to obtain such an empirical understanding. It is proved, mathematically, that establishing the most extensive notions of understanding needs exponentially many experiments in the number of neurons, in general, unless a widely posited hypothesis about computation is false. Worse still, the feasible experimental regime is one where the number of experiments scales sub-linearly in the number of neurons, suggesting a fundamental impediment to such an understanding. Determining which notions of understanding are algorithmically tractable thus becomes an important new endeavor in Neuroscience.

Article activity feed

  1. This manuscript is in revision at eLife

    The decision letter after peer review, sent to the authors on June 30, 2020, follows:

    Summary:

    This paper addresses a critical issue in neuroscience: what's the question, and can we answer it? The questions the author proposes are ones that have been considered, in one form or another, reasonably often by experimentalists. And the author shows rigorously that there's a reasonable chance that they are simply not answerable.

    Essential Revisions:

    We believe that this is an extremely important issue, and the approach the paper takes is a reasonable one for addressing it. Our main worry, though, is that mainstream neuroscientists will ignore it, for two reasons. One is that it's not a message they want to hear. The other is that the example circuits are sufficiently abstract that they can be dismissed as yet another musing by your typical uninformed theorist. (That is not, we should emphasize, our view, but it's not an uncommon one in the field.) Our goal, therefore, is to fix these potential problems, so that people will have to pay attention.

    The premise of the paper is that if you understand a neural circuit, there are certain questions about it that you should be able to answer. The author proposes six such questions, and then shows that, in the worst case, answering them is exponentially hard in the number of neurons.

    The success of this program hinges on two things: a sensible set of questions, and a demonstration that answering those questions is hard. We're not ecstatic about the questions, but we believe that's not an insurmountable issue (more on that below). More problematic is the result that the questions are hard to answer. What's really shown is that there is at least one circuit for which, in the worst case, answering the questions is exponentially hard. While this is certainly correct, it doesn't make a convincing case that answering these questions will be hard in the brain. First, the worst case may not be the typical case. The 3-SAT problem, for instance, is NP-complete, but random instances are hard to solve only in a narrow range of the clause-to-variable ratio, near the satisfiability threshold. Second, answering the questions for actual circuits found in the brain may not even be exponentially hard in the worst case.
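    To make the worst-case/typical-case distinction concrete, the sharp satisfiability threshold in random 3-SAT shows up even in a tiny brute-force experiment. The sketch below is our own illustration (the variable count, seed, and clause ratios are arbitrary choices of ours, not anything from the manuscript):

    ```python
    import itertools
    import random

    random.seed(0)
    n_vars = 12  # small enough to brute-force all 2^12 assignments

    def random_3sat(n_clauses):
        """Random 3-SAT instance: each clause has 3 distinct variables, randomly negated.
        A literal is (variable index, sign); sign True means the positive literal."""
        return [[(v, random.random() < 0.5)
                 for v in random.sample(range(n_vars), 3)]
                for _ in range(n_clauses)]

    def satisfiable(clauses):
        """Brute-force satisfiability check over all assignments (fine for small n)."""
        for bits in itertools.product([False, True], repeat=n_vars):
            if all(any(bits[v] == sign for v, sign in clause) for clause in clauses):
                return True
        return False

    def sat_fraction(ratio, trials=20):
        """Fraction of random instances at a given clause-to-variable ratio that are satisfiable."""
        return sum(satisfiable(random_3sat(int(ratio * n_vars)))
                   for _ in range(trials)) / trials

    low, high = sat_fraction(2.0), sat_fraction(6.0)
    # well below the threshold (~4.27 clauses/variable) almost every instance is
    # satisfiable; well above it almost none are
    ```

    The empirically hard random instances cluster near the threshold itself; away from it, typical instances are easy even though the problem is NP-complete in the worst case.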

    This brings us to two critical comments. First, it needs to be crystal clear that this paper does not demonstrate that answering the proposed questions is guaranteed to be exponentially hard, only that it might be. This was stated in the manuscript, but not emphasized. For instance, on lines 138-140, it says

    "Using techniques from Computational Complexity Theory, we ask what is the smallest number of experiments necessary, in general, in order to answer these questions, in typical experimental settings."

    Here "in general" means worst-case. For neuroscientists, though, "in general" means "most of the time". It should be clear that you mean worst case, and that the typical case may be very different.

    In fact, this needs an expanded discussion. Whether or not it will take an exponential number of experiments to answer the questions depends on the circuit. We might get lucky, and only a small number of experiments are needed. Or we might get unlucky, and a large number are needed. This analysis can't tell us that, and this should be clear in the paper.

    Second, what's really needed is the analysis of a more realistic circuit, ideally with both positive examples (for which it is possible to answer the questions) and negative examples (for which it isn't). This is hard, but we have a few suggestions, some of which can probably be done without a huge amount of work.

    a. Linear network, y = Ax + noise. For this (and possibly all realistic) situations, "perform the task" needs to be replaced by "achieve a certain level of performance". For instance, if there's a true mapping y* = f(x), then "perform the task" would be replaced with "<(y-y*)^2> below some threshold". The questions should be answerable in polynomial time for this network; otherwise, one should worry.
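    A minimal sketch of what this suggestion has in mind, with the performance criterion made explicit (the dimensions, noise level, and threshold are placeholder choices of ours):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out, n_samples = 20, 5, 1000
    A_true = rng.normal(size=(n_out, n_in))   # hypothetical "true" circuit weights
    x = rng.normal(size=(n_samples, n_in))
    y_star = x @ A_true.T                     # true mapping y* = f(x) = Ax

    noise = 0.1 * rng.normal(size=(n_samples, n_out))
    y = y_star + noise                        # observed circuit output

    # "performs the task" = mean squared error below a chosen threshold
    mse = float(np.mean((y - y_star) ** 2))
    threshold = 0.05
    performs_task = mse < threshold
    ```

    Note that recovering A from input-output pairs is a linear least-squares problem, solvable in polynomial time, which is consistent with the expectation that the questions should be answerable efficiently for this network.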

    b. In 2000, Hopfield and Brody came up with a simple circuit which we think of as "understandable" (Hopfield and Brody, PNAS 97:25, 13919-13924, 2000). It would be nice to determine whether the questions can be answered in polynomial time for this circuit. Again, if they can't, one should worry.

    c. Deep networks. Again, "perform the task" would have to be replaced with "performance is above some particular threshold". Here we suspect that the questions are not answerable; if that could be shown, it would be a huge step forward.

    d. A made-up model of a deep network. Assume that in a deep network, whenever you delete a neuron, performance drops. That's probably not so far from the truth -- and also not so far from what we think would happen in the brain. (With some exceptions; occasionally we hear talks where performance is better when two areas are ablated rather than just one, but let's ignore that.) How much performance drops depends, of course, on which neurons are deleted, so there's not a simple mapping between performance and which neurons are present in the circuit. Can the questions be answered in this case? This sounds like a problem computer scientists have considered, so possibly rigorous analysis could be done.
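    A toy version of the made-up model in (d), just to fix ideas (the architecture and numbers are invented for illustration): silencing any single hidden neuron hurts performance, but by neuron-dependent amounts.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_in, n_hidden = 10, 8
    W1 = rng.normal(size=(n_hidden, n_in))    # input-to-hidden weights
    w2 = rng.normal(size=n_hidden)            # hidden-to-output readout

    x = rng.normal(size=(500, n_in))
    target = np.tanh(x @ W1.T) @ w2           # treat the intact network as ground truth

    def performance(deleted):
        """Negative MSE of the ablated network against the intact output."""
        h = np.tanh(x @ W1.T)
        h[:, list(deleted)] = 0.0             # "delete" neurons by silencing them
        return -np.mean((h @ w2 - target) ** 2)

    full = performance(set())                 # 0.0 by construction
    drops = {i: full - performance({i}) for i in range(n_hidden)}
    # every single-neuron deletion degrades performance, but by different amounts,
    # so performance alone does not simply encode which neurons are present
    ```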

    We believe it's critical to consider a case that is not far from what one might find in the brain. Otherwise, it will be too easy to dismiss this work as irrelevant to real neuroscience. The above are only possibilities, and a and d may be pretty easy, but the author is welcome to come up with his own examples. Note that rigor is not absolutely necessary here, since there's already one rigorous example. Plausible arguments would be fine.

    Finally, "understand" needs further discussion. That's partly because the approach here is a little non-standard. Most people try to directly define "understanding". Instead, the statement is "if you understand a circuit, you should be able to answer these questions". This has to be made crystal clear -- especially since people aren't expecting it. In addition, a discussion of the more standard approach, a direct definition, should be included. The usual definition is something like "a short description of what is being computed, along with a description of an algorithm for computing it". It should be clear how this more standard definition compares to the one in the paper. For instance, under the standard definition it may be possible to understand a circuit without being able to answer any of the questions: we believe we can "understand" (by the more standard definition) the synfire chain circuit. This doesn't mean that one definition is better than the other, but their differences should be acknowledged.