Model discovery to link neural activity to behavioral tasks

Curation statements for this article:
  • Curated by eLife


    eLife assessment

This useful paper describes a sensitive method for identifying the contributions of different behavioral and stimulus parameters to neural activity. The method has been convincingly validated using simulated data and applied to example state-of-the-art datasets from mouse and zebrafish. The method could be productively applied to a wide range of experiments in behavioral and systems neuroscience, but it remained unclear how it relates to or improves on similar, existing methods.



Abstract

Brains are not engineered solutions to a well-defined problem but arose through selective pressure acting on random variation. It is therefore unclear how well a model chosen by an experimenter can relate neural activity to experimental conditions. Here, we developed ‘model identification of neural encoding’ (MINE). MINE is an accessible framework using convolutional neural networks (CNNs) to discover and characterize a model that relates aspects of tasks to neural activity. Although flexible, CNNs are difficult to interpret. We use Taylor decomposition approaches to understand the discovered model and how it maps task features to activity. We apply MINE to a published cortical dataset as well as to experiments designed to probe thermoregulatory circuits in zebrafish. In zebrafish, MINE allowed us to characterize neurons according to their receptive field and computational complexity, features that anatomically segregate in the brain. We also identified a new class of neurons that integrate thermosensory and behavioral information and that previously eluded us when we used traditional clustering and regression-based approaches.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    In this paper, the authors present a method for discovering response properties of neurons, which often have complex relationships with other experimentally measured variables, like stimuli and animal behaviors. To find these relationships, the authors fit neural data with artificial neural networks, which are chosen to have an architecture that is tractable and interpretable. To interpret the results, they examine the first- and second-order approximations of the fitted artificial neural network models. They apply their method profitably to two datasets.

    The strength of this paper is in the problem it is attempting to solve: it is important for the field to develop more useful ways to analyze and understand the massive neural datasets collected with modern imaging techniques.

    The weaknesses of this paper lie in its claims (1) to be model-free and (2) to distinguish the method from prior methods for systems identification, including spike-triggered averaging and covariance (or rather their continuous response equivalents). On the first claim, the systems identification methods are arguably a substantially more model-free approach. On the second claim, this reviewer would require more evidence that the presented approach is substantially different from, or an improvement on, systems identification methods in common use applied directly to the data.

    We thank the reviewer for carefully engaging with the manuscript and believe that our revisions address these points of critique both through novel analysis and through clarifications.

    First claim: We fully agree that systems identification approaches are in theory truly model-free, while MINE imposes constraints through the chosen architecture. However, our new analysis comparing MINE to direct fitting of the kernels of a Volterra expansion highlights that this is not really the case in practice. In order to obtain good fits, the model-freeness has to be substantially reduced by imposing constraints on the degrees of freedom. We quantify this reduction in Figure S3 and directly compare it to the effective degrees of freedom of the CNN. Reducing degrees of freedom is also a theme that can be found throughout the literature on systems identification, especially when the analysis does not involve Gaussian white noise as input stimuli. We therefore stand by our claim that MINE is “essentially model-free” in the sense that, much like systems identification, it does not rely on defining a model a priori. We also clarify our choice of calling the method “model-free” in the introduction, where we state: “While the architecture and hyper-parameters of the CNN used by MINE do impose constraints on which relationships can be modeled, we consider the convolutional network ‘model-free’ because it does not make any explicit assumptions about the underlying probability distributions or functional forms of the data.”
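
As a concrete illustration of why unconstrained kernel fitting becomes impractical, the parameter count of a truncated second-order Volterra expansion grows quadratically with history length. This is a generic sketch with illustrative history lengths, not the configuration analyzed in Figure S3:

```python
# Free parameters of a second-order Volterra expansion over a history of T
# samples: 1 constant + T first-order kernel weights + T*(T+1)/2 unique
# second-order kernel weights (the second-order kernel is symmetric).
def volterra_param_count(T: int) -> int:
    return 1 + T + T * (T + 1) // 2

for T in (10, 50, 100):
    print(f"history {T}: {volterra_param_count(T)} parameters")
```

The quadratic growth of the second-order term is what motivates constraining the degrees of freedom before fitting.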

    Second claim: We believe that our new analysis comparing MINE with the Volterra expansion approach of systems identification addresses this point. By directly fitting Volterra kernels instead of relying on spike-triggered analysis, we put the comparison on a more equal footing than our previous STA/STC exposition did. We show that while the methods are equivalent for Gaussian white noise stimuli, MINE is superior for highly correlated input stimuli. Imposing constraints on the regression used to identify the Volterra kernels can close this gap to a large extent, but MINE still produces a model with higher predictive power, and it does more than extract receptive fields. We are also not entirely sure to what extent Wiener/Volterra analysis has been applied to calcium imaging data. While there is a vast body of literature on systems identification, there is little evidence that it has been widely applied to data in which both inputs and outputs are highly correlated across time, such as calcium imaging experiments using naturalistic stimuli. While this absence is not conclusive in itself, it may indicate that the analysis is not easily accessible and requires ample tuning. These are precisely the two problems that MINE aims to overcome. We now state more explicitly in the manuscript that we believe this accessibility to be one of the core strengths of MINE.
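
The kind of constrained kernel regression discussed above can be sketched as follows: for a temporally correlated stimulus, a first-order kernel can be estimated either by unconstrained least squares or by ridge regression, which caps the effective degrees of freedom. All sizes, the smoothing window, and the ridge penalty below are illustrative choices, not the values used in the manuscript:

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 2000, 20  # timepoints and kernel length; illustrative sizes only

# A temporally correlated "naturalistic" stimulus: boxcar-smoothed white noise.
s = np.convolve(rng.standard_normal(T + L), np.ones(25) / 25, mode="same")

# Ground-truth first-order kernel and a noisy response generated from it.
k_true = np.exp(-np.arange(L) / 5.0)
X = np.column_stack([s[L - 1 - i : L - 1 - i + T] for i in range(L)])  # lagged design
y = X @ k_true + 0.5 * rng.standard_normal(T)

# Unconstrained least squares vs. ridge regression; the ridge penalty is the
# kind of degrees-of-freedom constraint discussed above (penalty chosen ad hoc).
k_ols = np.linalg.lstsq(X, y, rcond=None)[0]
k_ridge = np.linalg.solve(X.T @ X + 10.0 * np.eye(L), X.T @ y)

print(np.linalg.norm(k_ols - k_true), np.linalg.norm(k_ridge - k_true))
```

With white noise input the two estimates would essentially agree; the correlated stimulus is what makes the unconstrained solution ill-conditioned and the constraint useful.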

    Reviewer #2 (Public Review):

    This paper describes a relatively unbiased and sensitive method for identifying the contributions of different behavioral parameters to neural activity. Their approach addresses, in an elegant way, several difficulties that arise in modeling of neuronal responses in population imaging data, namely variations in temporal filtering and latency, the effects of calcium indicator kinetics, interactions between different variables, and non-linear computations. Typical approaches to solving these problems require the introduction of prior knowledge or assumptions that bias the output, or involve a trade-off between model complexity and interpretability. The authors fit individual neurons' responses using neural network models that allow for complex non-linear relationships between behavioral variables and outputs, but combine this with analysis, based on Taylor series approximations of the network function, that gives insight into how different variables are contributing to the model.

    The authors have thoroughly validated their method using simulated data as well as showing its applicability to example state-of-the-art datasets from mouse and zebrafish. They provide evidence that it can outperform current approaches based on linear regression for the identification of neurons carrying behaviorally relevant signals. They also demonstrate use cases showing how their approach can be used to classify neurons based on computational features. They have provided Python code for the implementation and have explained the methods well, so it will be easy for other groups to replicate their work. The method could be applied productively to many types of experiments in behavioral and systems neuroscience across different model systems. Overall, the paper is clearly written and the experiments are well designed and analysed, and represent a useful contribution to the neuroscience field.

    We thank the reviewer for their favorable assessment of our work.

    Reviewer #3 (Public Review):

    In the current study, the authors present a novel and original approach (termed MINE) to analyze neuronal recordings in terms of task features. The method proposed combines the interpretability of regressor-based methods with the flexibility of convolutional neural networks and the aim is to provide an unbiased, "model-free" approach to this very important problem.

    In my opinion, the authors succeed in most of these aspects. They use three datasets: an artificially-generated one that provides a ground-truth, a published dataset from wide-scale cortical mouse recordings and a novel one that studies thermosensation in larval zebrafish. MINE compares favorably in all three cases.

    I believe that the paper would mostly benefit from an increased effort in clear exposition of the Taylor expansion approach, which is at the core of the method. The methods section describes the mathematics, but I wonder whether it would be possible to illustrate or schematize this in a main Figure, e.g. as an addition to Figure 1 or as a new figure. Around line 185, the manuscript reads: "We therefore perform local Taylor expansions of the network at different experimental timepoints. In other words, we differentiate the network's learned transfer function that transforms predictors into neural activity."

    It would help to explicitly state with respect to what the derivative is being computed (i.e. time) and maybe a diagram (which I had to draw to understand the paper) in which a neuronal activity trace is shown and from time t onwards a prediction is computed using terms in the Taylor expansion would be very instructive (showing on an actual trace how disregarding certain terms changes the prediction and hence the conclusions about the actual dependence of the trace on the behavioral features). The formulation in terms of Jacobians and Hessians can then be restricted to the Methods section and the paper will be easier to read for a wider audience.

    We agree with the reviewer that readability is key. We hope that our re-write and re-organization of the manuscript make it easier to follow. We now start with a unified description of complexity and non-linearity, both derived from a Taylor decomposition around the data average. We use this section (starting Line 91) to lay out the logic of the Taylor expansion and explicitly state that the derivatives describe the expected change in output given any change in predictors. We did not want to remove the math entirely from the paper, simply because we found it hard to explain the concept without it. We have annotated the formula parts in the new Figure 2 and added a small schematic illustrating the pointwise expansion of the Taylor metric in the new Figure 4.
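
The truncation logic described above can be illustrated numerically. The sketch below uses a toy stand-in for a learned transfer function (not MINE's actual CNN) and finite differences instead of automatic differentiation; it shows how successive Taylor truncations around an expansion point approximate the output at a nearby data point:

```python
import numpy as np

# Toy stand-in for a network's learned transfer function of two predictors
# (hypothetical; MINE's actual CNN maps predictor histories to activity).
w = np.array([1.0, -0.5])
f = lambda x: np.tanh(w @ x) ** 2

x_bar = np.array([0.3, 0.1])   # expansion point (e.g. the data average)
x = np.array([0.5, 0.25])      # a nearby data point
d = x - x_bar

# Jacobian and Hessian of f at x_bar via central finite differences.
eps = 1e-4
basis = np.eye(2)
J = np.array([(f(x_bar + eps * e) - f(x_bar - eps * e)) / (2 * eps) for e in basis])
H = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        ei, ej = basis[i], basis[j]
        H[i, j] = (f(x_bar + eps * (ei + ej)) - f(x_bar + eps * (ei - ej))
                   - f(x_bar - eps * (ei - ej)) + f(x_bar - eps * (ei + ej))) / (4 * eps ** 2)

# Successive truncations of the Taylor expansion around x_bar.
order0 = f(x_bar)                   # constant term only
order1 = order0 + J @ d             # + linear (Jacobian) term
order2 = order1 + 0.5 * d @ H @ d   # + quadratic (Hessian) term

for name, approx in [("0th", order0), ("1st", order1), ("2nd", order2)]:
    print(name, "order error:", abs(approx - f(x)))
```

Each added term shrinks the prediction error, which is exactly how disregarding terms changes the conclusions one draws about a trace's dependence on the predictors.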

    The method is presented as a "model-free" approach (title and introduction). I think it would help to discuss this with some precision. The Taylor expansion approach does imply certain beliefs on the structure of the data (which are well founded in most cases). Do the authors agree that MINE would encapsulate any regression model where both linear and interaction terms are allowed to include an arbitrary non-linearity (in the case of the interaction terms, different non-linearities for both variables)? If this is the case, maybe an explicit statement would allow the reader to quickly identify the versatility of MINE.

    We are now attempting to make the statement of model-free more precise through quantifications in our rewritten section on deriving receptive fields. We now provide an explanation in the introduction for why we believe that “model-free” is justified. We state: “While the architecture and hyper-parameters of the CNN used by MINE do impose constraints on which relationships can be modeled, we consider the convolutional network ‘model-free’ because it does not make any explicit assumptions about the underlying probability distributions or functional forms of the data.”

    In principle, MINE can accommodate higher-order interactions as well (say, of the form x·y·z or x·y²), and it certainly has flexibility in applying nonlinear transformations. However, we did not find a satisfying way to quantify exactly the space of possible models MINE can represent and therefore do not feel comfortable making a precise statement about this.
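
The flexibility mentioned above can be illustrated with a toy higher-order interaction: a response of the form a·b² is invisible to a purely linear regression but trivially captured once the interaction enters the model. The predictors and response below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = rng.standard_normal(4000), rng.standard_normal(4000)  # synthetic predictors

# Synthetic response with a higher-order interaction of the form a * b^2.
resp = a * b ** 2

def r2(design, target):
    # Fraction of variance explained by a least-squares fit of the design.
    pred = design @ np.linalg.lstsq(design, target, rcond=None)[0]
    return 1.0 - (target - pred).var() / target.var()

X_lin = np.column_stack([np.ones_like(a), a, b])   # purely linear model
X_int = np.column_stack([X_lin, a * b ** 2])       # interaction term added explicitly

print(r2(X_lin, resp), r2(X_int, resp))
```

A regression-based analysis only captures such a term if the experimenter thinks to include it; a flexible network can in principle discover it from the data.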

    I find the section relating to non-linearities interesting, but was slightly disappointed to find that the authors do not propose a single method. In Figure 3E, the authors show that a logistic regression model that combines the curvature and NLC approaches outperforms either, but the model is not described in any detail. I appreciate the attempt made by the authors to apply this to the zebrafish imaging dataset in Figure 7, but it was still unclear to me how non-linearities and complexity are related.

    We fully agree with the reviewer. We have now merged non-linearity and complexity determination. We hope that this a) simplifies the paper and b) creates a metric that likely generalizes better and in which specific values are more interpretable. In brief, we now define both the nonlinearity and complexity based on truncations of the Taylor expansion around the data average. This new result section (Lines 90-142) also gives us a chance to (hopefully) better introduce the Taylor expansion approach.
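
One way to picture such a truncation-based metric: compare how much output variance survives a first-order (linear) approximation around the data average. The sketch below substitutes an ordinary-least-squares linear fit for the derivative-based truncation described above and uses made-up response functions, so it illustrates the idea rather than the published metric:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((5000, 1))   # synthetic predictor samples

# Made-up "network outputs" for a linear and a nonlinear model neuron.
out_linear = 2.0 * x[:, 0]
out_nonlin = np.tanh(2.0 * x[:, 0]) + 0.5 * x[:, 0] ** 2

def linear_truncation_r2(xs, f_vals):
    """Variance explained by a first-order approximation around the data
    average (estimated here by least squares on centered predictors)."""
    xc = xs - xs.mean(axis=0)
    X = np.column_stack([np.ones(len(xc)), xc])
    pred = X @ np.linalg.lstsq(X, f_vals, rcond=None)[0]
    return 1.0 - (f_vals - pred).var() / f_vals.var()

print(linear_truncation_r2(x, out_linear))   # ~1: a linear truncation suffices
print(linear_truncation_r2(x, out_nonlin))   # clearly below 1: variance is lost
```

The shortfall of the truncated model relative to the full one is what ties nonlinearity and complexity together in a single, interpretable quantity.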

  2. eLife assessment

    This useful paper describes a sensitive method for identifying the contributions of different behavioral and stimulus parameters to neural activity. The method has been convincingly validated using simulated data and applied to example state-of-the-art datasets from mouse and zebrafish. The method could be productively applied to a wide range of experiments in behavioral and systems neuroscience, but it remained unclear how it relates to or improves on similar, existing methods.

  3. Reviewer #1 (Public Review):

    In this paper, the authors present a method for discovering response properties of neurons, which often have complex relationships with other experimentally measured variables, like stimuli and animal behaviors. To find these relationships, the authors fit neural data with artificial neural networks, which are chosen to have an architecture that is tractable and interpretable. To interpret the results, they examine the first- and second-order approximations of the fitted artificial neural network models. They apply their method profitably to two datasets.

    The strength of this paper is in the problem it is attempting to solve: it is important for the field to develop more useful ways to analyze and understand the massive neural datasets collected with modern imaging techniques.

    The weaknesses of this paper lie in its claims (1) to be model-free and (2) to distinguish the method from prior methods for systems identification, including spike-triggered averaging and covariance (or rather their continuous response equivalents). On the first claim, the systems identification methods are arguably a substantially more model-free approach. On the second claim, this reviewer would require more evidence that the presented approach is substantially different from, or an improvement on, systems identification methods in common use applied directly to the data.

  4. Reviewer #2 (Public Review):

    This paper describes a relatively unbiased and sensitive method for identifying the contributions of different behavioral parameters to neural activity. Their approach addresses, in an elegant way, several difficulties that arise in modeling of neuronal responses in population imaging data, namely variations in temporal filtering and latency, the effects of calcium indicator kinetics, interactions between different variables, and non-linear computations. Typical approaches to solving these problems require the introduction of prior knowledge or assumptions that bias the output, or involve a trade-off between model complexity and interpretability. The authors fit individual neurons' responses using neural network models that allow for complex non-linear relationships between behavioral variables and outputs, but combine this with analysis, based on Taylor series approximations of the network function, that gives insight into how different variables are contributing to the model.

    The authors have thoroughly validated their method using simulated data as well as showing its applicability to example state-of-the-art datasets from mouse and zebrafish. They provide evidence that it can outperform current approaches based on linear regression for the identification of neurons carrying behaviorally relevant signals. They also demonstrate use cases showing how their approach can be used to classify neurons based on computational features. They have provided Python code for the implementation and have explained the methods well, so it will be easy for other groups to replicate their work. The method could be applied productively to many types of experiments in behavioral and systems neuroscience across different model systems. Overall, the paper is clearly written and the experiments are well designed and analysed, and represent a useful contribution to the neuroscience field.

  5. Reviewer #3 (Public Review):

    In the current study, the authors present a novel and original approach (termed MINE) to analyze neuronal recordings in terms of task features. The method proposed combines the interpretability of regressor-based methods with the flexibility of convolutional neural networks and the aim is to provide an unbiased, "model-free" approach to this very important problem.

    In my opinion, the authors succeed in most of these aspects. They use three datasets: an artificially-generated one that provides a ground-truth, a published dataset from wide-scale cortical mouse recordings and a novel one that studies thermosensation in larval zebrafish. MINE compares favorably in all three cases.

    I believe that the paper would mostly benefit from an increased effort in clear exposition of the Taylor expansion approach, which is at the core of the method. The methods section describes the mathematics, but I wonder whether it would be possible to illustrate or schematize this in a main Figure, e.g. as an addition to Figure 1 or as a new figure. Around line 185, the manuscript reads: "We therefore perform local Taylor expansions of the network at different experimental timepoints. In other words, we differentiate the network's learned transfer function that transforms predictors into neural activity."

    It would help to explicitly state with respect to what the derivative is being computed (i.e. time) and maybe a diagram (which I had to draw to understand the paper) in which a neuronal activity trace is shown and from time t onwards a prediction is computed using terms in the Taylor expansion would be very instructive (showing on an actual trace how disregarding certain terms changes the prediction and hence the conclusions about the actual dependence of the trace on the behavioral features). The formulation in terms of Jacobians and Hessians can then be restricted to the Methods section and the paper will be easier to read for a wider audience. The method is presented as a "model-free" approach (title and introduction). I think it would help to discuss this with some precision. The Taylor expansion approach does imply certain beliefs on the structure of the data (which are well founded in most cases). Do the authors agree that MINE would encapsulate any regression model where both linear and interaction terms are allowed to include an arbitrary non-linearity (in the case of the interaction terms, different non-linearities for both variables)? If this is the case, maybe an explicit statement would allow the reader to quickly identify the versatility of MINE.

    I find the section relating to non-linearities interesting, but was slightly disappointed to find that the authors do not propose a single method. In Figure 3E, the authors show that a logistic regression model that combines the curvature and NLC approaches outperforms either, but the model is not described in any detail. I appreciate the attempt made by the authors to apply this to the zebrafish imaging dataset in Figure 7, but it was still unclear to me how non-linearities and complexity are related.