Dynamic organization of visual cortical networks inferred from massive spiking datasets

Curation statements for this article:
  • Curated by eLife


    eLife Assessment

    The authors describe a model for tracking time-varying functional connectivity between neurons from multi-electrode spike recordings. This is an interesting and potentially useful approach to an open problem in neural data analysis, and could be an essential tool for investigating the neural code from large-scale in-vivo recordings of spiking activity. However, the evidence is incomplete: systematic comparisons with existing methods and/or demonstration of its utility relative to conventional methods are essential to demonstrate the usefulness of the method.


Abstract

Complex cognitive functions in the mammalian brain are distributed across many anatomically and functionally distinct areas and rely on highly dynamic routing of neural activity across the network. While modern electrophysiology methods enable recording of spiking activity from increasingly large neuronal populations at a cellular level, development of probabilistic methods to extract these dynamic inter-area interactions is lagging. Here, we introduce an unsupervised machine learning model that infers dynamic connectivity across the recorded neuronal population from the synchrony of their spiking activity. As opposed to traditional population decoding models that reveal dynamics of the whole population, the model produces cellular-level, cell-type-specific dynamic functional interactions that are otherwise omitted from analysis. The model is evaluated on ground-truth synthetic data and compared to alternative methods to validate and quantify model predictions. Our strategy incorporates two sequential stages – extraction of the static connectivity structure of the network, followed by inference of temporal changes of the connection strength. This two-stage architecture enables detailed statistical criteria to be developed to evaluate the confidence of the model predictions in comparison with traditional descriptive statistical methods. We applied the model to analyze large-scale in-vivo recordings of spiking activity across mammalian visual cortices. The model enables the discovery of cellular-level dynamic connectivity patterns in local and long-range circuits across the whole visual cortex with temporally varying strength of feedforward and feedback drives during sensory stimulation. Our approach provides a conceptual link between slow brain-wide network dynamics studied with neuroimaging and fast cellular-level dynamics enabled by modern electrophysiology, and may help to uncover often overlooked dimensions of the brain code.

Article activity feed

  1. eLife Assessment

    The authors describe a model for tracking time-varying functional connectivity between neurons from multi-electrode spike recordings. This is an interesting and potentially useful approach to an open problem in neural data analysis, and could be an essential tool for investigating the neural code from large-scale in-vivo recordings of spiking activity. However, the evidence is incomplete: systematic comparisons with existing methods and/or demonstration of its utility relative to conventional methods are essential to demonstrate the usefulness of the method.

  2. Reviewer #1 (Public review):

    Summary:

    This work proposes a new method, DyNetCP, for inferring dynamic functional connectivity between neurons from spike data. DyNetCP is based on a neural network model with a two-stage model architecture of static and dynamic functional connectivity.
    This work evaluates the accuracy of the synaptic connectivity inference and shows that DyNetCP can infer the excitatory synaptic connectivity more accurately than a state-of-the-art model (GLMCC) by analyzing the simulated spike trains. Furthermore, it is shown that the inference results obtained by DyNetCP from large-scale in-vivo recordings are similar to the results obtained by the existing methods (jitter-corrected CCG and JPSTH). Finally, this work investigates the dynamic connectivity in the primary visual area VISp and across higher visual areas using DyNetCP.

    Strengths:

    The strength of the paper is that it proposes a method to extract the dynamics of functional connectivity from spike trains of multiple neurons. The method is potentially useful for analyzing parallel spike trains in general, as there are only a few methods (e.g. Aertsen et al., J. Neurophysiol., 1989, Shimazaki et al., PLoS Comput Biol 2012) that infer the dynamic connectivity from spikes. Furthermore, the approach of DyNetCP is different from the existing methods: while the proposed method is based on the neural network, the previous methods are based on either the descriptive statistics (JPSTH) or the Ising model.

    Weaknesses:

    Although the paper proposes a new method, DyNetCP, for inferring the dynamic functional connectivity, its strengths are neither clear nor directly demonstrated in this paper. That is, insufficient analyses are performed to support the usefulness of DyNetCP.
    First, this paper attempts to show the superiority of DyNetCP by comparing the performance of synaptic connectivity inference with GLMCC (Fig. 2). However, the improvement in the synaptic connectivity inference does not seem to be convincing. While this paper compares the performance of DyNetCP with a state-of-the-art method (GLMCC), there are several problems with the comparison. For example,

    (1) It is unclear how accurately the proposed method can infer the dynamic connectivity.
    (2) This paper does not compare with existing approaches (e.g., classical JPSTH: Aertsen et al., J. Neurophysiol., 1989, and other methods: Stevenson and Koerding, NIPS, 2011; Linderman et al., NIPS, 2014; Song et al., J. Neurosci. Methods, 2015), and
    (3) only a population of neurons generated from the Hodgkin-Huxley model was evaluated.

    Thus, the results in this paper are not sufficient to conclude the superiority of DyNetCP in the estimation of synaptic connections. In addition, this paper compares the proposed method with standard statistical methods, such as the jitter-corrected CCG (Fig. 3) and the JPSTH (Fig. 4). Unfortunately, these results do not show the superiority of the proposed method: they only show that the results obtained by the proposed method are consistent with those obtained by the existing methods (CCG or JPSTH).

    In summary, although DyNetCP has the potential to infer the dynamic (time-dependent) correlation more accurately than existing methods, the paper does not provide sufficient analysis to make this claim. It is also unclear whether the proposed method is superior to the existing methods for estimating functional connectivity, such as JPSTH and statistical approach (Stevenson and Koerding, NIPS, 2011; Linderman et al., NIPS, 2014). Thus, the strength of DyNetCP is unclear.

  3. Reviewer #2 (Public review):

    Summary:

    Here the authors describe a model for tracking time-varying coupling between neurons from multi-electrode spike recordings. Their approach extends a GLM with static coupling between neurons to include dynamic weights, learned by a long-short-term-memory (LSTM) model. Each connection has a corresponding LSTM embedding and is read-out by a multi-layer perceptron to predict the time-varying weight.
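The static stage described above, coupling between a source and a target neuron via lagged spike histories, can be illustrated with plain logistic regression. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the function name `static_coupling_glm`, the gradient-descent fitting, and the restriction to a single neuron pair are all simplifications (DyNetCP learns all pairs jointly and adds the LSTM-based dynamic stage on top).

```python
import numpy as np

def static_coupling_glm(src, tgt, n_lags=10, lr=0.1, n_iter=200):
    """Fit a logistic-regression GLM predicting the target train's spikes
    from lagged spikes of the source train.

    src, tgt: binary spike arrays of shape (n_trials, n_bins).
    Returns (coupling filter of length n_lags, bias); filter index 0
    corresponds to lag 1 bin, index 1 to lag 2, etc.
    """
    n_trials, n_bins = src.shape
    X, y = [], []
    for trial in range(n_trials):
        for t in range(n_lags, n_bins):
            X.append(src[trial, t - n_lags:t][::-1])  # most recent lag first
            y.append(tgt[trial, t])
    X, y = np.asarray(X, float), np.asarray(y, float)
    X = np.hstack([X, np.ones((len(X), 1))])          # bias column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):                           # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-X @ w))              # predicted spike prob.
        w -= lr * X.T @ (p - y) / len(y)              # logistic-loss gradient
    return w[:-1], w[-1]
```

On a toy train where the target simply repeats the source two bins later, the fitted filter peaks at the lag-2 position, which is the kind of static weight the comparison with CCG peaks relies on.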

    Strengths:

    This is an interesting approach to an open problem in neural data analysis. I think, in general, the method would be interesting to computational neuroscientists.

    Weaknesses:

    It is somewhat difficult to interpret what the model is doing. I think it would be worthwhile to add some additional results that make it more clear what types of patterns are being described and how.

    Major Issues:

    Simulation for dynamic connectivity. It certainly seems doable to simulate a recurrent spiking network whose weights change over time, and I think this would be a worthwhile validation for this DyNetCP model. In particular, I think it would be valuable to understand how much the model overfits, and how accurately it can track known changes in coupling strength. If the only goal is "smoothing" time-varying CCGs, there are much easier statistical methods to do this (c.f. McKenzie et al. Neuron, 2021. Ren, Wei, Ghanbari, Stevenson. J Neurosci, 2022), and simulations could be useful to illustrate what the model adds beyond smoothing.

    Stimulus vs noise correlations. For studying correlations between neurons in sensory systems that are strongly driven by stimuli, it's common to use shuffling over trials to distinguish between stimulus correlations and "noise" correlations or putative synaptic connections. This would be a valuable comparison for Fig 5 to show if these are dynamic stimulus correlations or noise correlations. I would also suggest just plotting the CCGs calculated with a moving window to better illustrate how (and if) the dynamic weights differ from the data.
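The reviewer's moving-window suggestion can be sketched directly: compute a raw CCG within short windows sliding over trial time, giving a descriptive time-resolved correlation against which learned dynamic weights could be compared. This is an illustrative sketch only; the function name, the coincidence-count normalization, and the window/step sizes are assumptions, and no jitter or shuffle correction is applied.

```python
import numpy as np

def moving_window_ccg(src, tgt, max_lag=10, win=50, step=25):
    """Cross-correlograms of two binned spike trains computed in sliding
    windows over trial time, averaged across trials.

    src, tgt: binary arrays (n_trials, n_bins).
    Returns (window_starts, lags, ccg), ccg of shape (n_windows, 2*max_lag+1).
    Positive lag means tgt fires after src.
    """
    n_trials, n_bins = src.shape
    starts = np.arange(0, n_bins - win + 1, step)
    lags = np.arange(-max_lag, max_lag + 1)
    ccg = np.zeros((len(starts), len(lags)))
    for wi, s in enumerate(starts):
        a, b = src[:, s:s + win], tgt[:, s:s + win]
        for li, lag in enumerate(lags):
            if lag >= 0:
                ccg[wi, li] = np.mean(a[:, :win - lag] * b[:, lag:])
            else:
                ccg[wi, li] = np.mean(a[:, -lag:] * b[:, :win + lag])
    return starts, lags, ccg
```

A geometric-mean firing-rate normalization per window, as is common for CCGs, could be added; the raw coincidence rate is kept here for brevity.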

    Minor Issues:

    Introduction - it may be useful to mention that there have been some previous attempts to describe time-varying connectivity from spikes both with probabilistic models: Stevenson and Kording, Neurips (2011), Linderman, Stock, and Adams, Neurips (2014), Robinson, Berger, and Song, Neural Computation (2016), Wei and Stevenson, Neural Comp (2021) ... and with descriptive statistics: Fujisawa et al. Nat Neuroscience (2008), English et al. Neuron (2017), McKenzie et al. Neuron (2021).

    In the sections "Static DyNetCP ...reproduce". It may be useful to have some additional context to interpret the CCG-DyNetCP correlations and CCG-GLMCC correlations (for simulation). If I understand right, these are on training data (not cross-validated) and the DyNetCP model is using NM+1 parameters to predict ~100 data points (It would also be good to say what N and M are for the results here). The GLMCC model has 2 or 3 parameters (if I remember right?).

    In the section "Static connectivity inferred by the DyNetCP from in-vivo recordings is biologically interpretable"... I may have missed it, but how is the "functional delay" calculated? And am I understanding right that for the DyNetCP you are just using [w_{i→j}, w_{j→i}] in place of the CCG?

  4. Author response:

    The following is the authors’ response to the original reviews.

    We thank the reviewers for the constructive criticism and detailed assessment of our work, which helped us to significantly improve our manuscript. We made substantial changes to the text to better clarify our goals and approaches. To make our main goal of extracting the network dynamics clearer and to highlight the main advantage of our method in comparison with prior work, we incorporated Videos 1-4 into the main text. We hope that these changes, together with the rest of our responses, convincingly demonstrate the utility of our method in producing results that are typically omitted from analysis by other methods and can provide important novel insights into the dynamics of brain circuits.

    Reviewer #1 (Public Review):

    (1) “First, this paper attempts to show the superiority of DyNetCP by comparing the performance of synaptic connectivity inference with GLMCC (Figure 2).”

    We believe that the goals of our work were not adequately formulated in the original manuscript, which generated this apparent misunderstanding. As opposed to most of the prior work focused on reconstruction of static connectivity from spiking data (including GLMCC), our ultimate goal is to learn the dynamic connectivity structure, i.e. to extract the time-dependent strength of the directed connectivity in the network. Since this formulation is fundamentally different from most prior work, the goal here is not to show "improvement" or "superiority" over prior methods that mostly focused on inference of static connectivity, but rather to thoroughly validate our approach and to show its usefulness for the dynamic analysis of experimental data.

    (2) “This paper also compares the proposed method with standard statistical methods, such as jitter-corrected CCG (Figure 3) and JPSTH (Figure 4). It only shows that the results obtained by the proposed method are consistent with those obtained by the existing methods (CCG or JPSTH), which does not show the superiority of the proposed method.”

    The major problem in designing such a dynamic model is the virtual absence of ground-truth data, either as verified experimental datasets or as synthetic data with known time-varying connectivity. In this situation, optimization of the model hyper-parameters and model verification largely become a "shot in the dark". Therefore, to resolve this problem and make the model generalizable, we adopted a two-stage approach: in the first stage we learn static connections, followed in the second stage by inference of temporally varying dynamic connectivity. Dividing the problem into two stages enables us to separately compare the results of both stages to traditional descriptive statistical approaches. Static connectivity results of the model obtained in stage 1 are compared to the classical pairwise CCG (Fig.2A,B) and GLMCC (Fig.2C,D,E), while the dynamic connectivity obtained in stage 2 is compared to the pairwise JPSTH (Fig.4D,E).

    Importantly, the goal here is therefore not to "outperform" the classical descriptive statistical or any other approaches, but rather to have solid guidance for designing the model architecture and optimizing hyper-parameters. For example, to produce static weight results in Fig.2A,B that are statistically indistinguishable from the results of the classical CCG, the procedure for selecting the weights that contribute to averaging is designed as shown in Fig.9 and discussed in detail in the Methods. Optimization of the L2 regularization parameter, illustrated in Fig.4 – figure supplement 1, enables producing dynamic weights very close to the cJPSTH, as evidenced by the Pearson coefficient and TOST statistical tests. These comparisons demonstrate that the results of the CCG and JPSTH are indeed faithfully reproduced by our model, which, we conclude, is sufficient justification for applying the model to analyze experimental results.
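To make the equivalence criterion concrete, a comparison of this kind can be run with a generic two one-sided tests (TOST) procedure on paired samples (e.g., matched model-weight and cJPSTH values). The sketch below is illustrative only and is not the manuscript's actual implementation; the function name, the equivalence bound, and the paired-sample framing are assumptions.

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, bound):
    """Two one-sided tests (TOST) for equivalence of paired samples.

    Tests H0: |mean(x - y)| >= bound against H1: |mean(x - y)| < bound.
    Returns the larger of the two one-sided p-values; equivalence is
    claimed when this value falls below the chosen alpha.
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + bound) / se            # H0: mean <= -bound
    t_upper = (d.mean() - bound) / se            # H0: mean >= +bound
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper)
```

Note that TOST inverts the usual logic: a small p-value here supports equivalence within the stated bound, which is why it complements, rather than duplicates, a Pearson correlation between the two weight matrices.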

    (3) “However, the improvement in the synaptic connectivity inference does not seem to be convincing.”

    We are grateful to the reviewer for pointing out this issue, which, we believe, results from the failure of the original manuscript to clarify the major motivation for this comparison, as mentioned above. Comparison of the static connectivity inferred by stage 1 of our model to the results of GLMCC in Fig.2C,D,E is aimed at optimizing two more important parameters - the pair spike threshold and the peak height threshold. In Fig. 2D we show that when the peak height threshold is reduced from a rigorous 7 standard deviations (SD) to just 5 SD, our model recovers 74% of the ground-truth connections, which is in fact better than the 69% produced by GLMCC for a comparable pair spike threshold of 80. As explained above, we do not intend to emphasize here that our model is "superior", since that was not our goal, but rather use this comparison to illustrate the approach for optimizing the thresholds for filtering units and pairs, as described in detail in Fig. 11 and the corresponding section in Methods.

    To address these misunderstandings and better clarify the goal of our work, we changed the text in the Introductory section accordingly. We also incorporated Videos 1-4 from the Supplementary Materials into the main text as Video 1, Video 2, Video 3, and Video 4. In fact, these videos represent the main advantage (or "superiority") of our model with respect to prior art: they enable inference of the time-dependent dynamics of network connectivity as opposed to static connections.

    (4) “While this paper compares the performance of DyNetCP with a state-of-the-art method (GLMCC), there are several problems with the comparison. For example:

    (a) This paper focused only on excitatory connections (i.e., ignoring inhibitory neurons).

    (b) This paper does not compare with existing neural network-based methods (e.g., CoNNECT: Endo et al. Sci. Rep. 2021; Deep learning: Donner et al. bioRxiv, 2024).

    (c) Only a population of neurons generated from the Hodgkin-Huxley model was evaluated.”

    (a) In general, the model of Eq.1 is agnostic to whether the connections it recovers are excitatory or inhibitory. In fact, Fig. 5 and Fig.6 illustrate inferred dynamic weights for both excitatory (red arrows) and inhibitory (blue arrows) connections between excitatory (red triangles) and inhibitory (blue circles) neurons. Similarly, inhibitory and excitatory dynamic interactions between connections are represented in Fig. 7 for the larger network across all visual cortices.

    (b) As stated above, the goal of comparing the static connectivity results of stage 1 of our model to other approaches is to guide the choice of thresholds and the optimization of hyperparameters, rather than to claim "superiority" of our model. Therefore, comparison with the "static" CNN-based model of Endo et al. or the ANN-based static model of Donner et al. (submitted to bioRxiv several months after our submission to eLife) is beyond the scope of this work.

    (c) We have chosen exactly the same sub-population of neurons from the synthetic HH dataset of Ref. 26 that is used in Fig.6 of Ref. 26, which provides a direct comparison between the connections reconstructed by GLMCC in the original Ref. 26 and the results of our model.

    (5) “In summary, although DyNetCP has the potential to infer synaptic connections more accurately than existing methods, the paper does not provide sufficient analysis to make this claim. It is also unclear whether the proposed method is superior to the existing methods for estimating functional connectivity, such as jitter-corrected CCG and JPSTH. Thus, the strength of DyNetCP is unclear.”

    As we explained above, we have no intention to claim that our model is more accurate than existing static approaches. In fact, it is not feasible to obtain a better estimation of connectivity than direct descriptive statistical methods such as the CCG or JPSTH. Instead, comparisons with static (CCG and GLMCC) and temporal (JPSTH) approaches are used here to guide the choice of the model thresholds and to inform the optimization of hyper-parameters to make the prediction of the dynamic network connectivity reliable. The main strength of DyNetCP is the inference of dynamic connectivity, as illustrated in Videos 1-4. We demonstrated the utility of the method on the largest in-vivo experimental dataset available today and extracted the dynamics of cortical connectivity in local and global visual networks. This information is unattainable with any other contemporary method we are aware of.

    Reviewer #1 (Recommendations for the Authors):

    (6) “First, the authors should clarify the goal of the analysis, i.e., to extract either the functional connectivity or the synaptic connectivity. While this paper assumes that they are the same, it should be noted that functional connectivity can be different from synaptic connectivity (see Stevenson IH, Neurons Behav. Data Anal. Theory 2023).”

    The goal of our analysis is to extract the dynamics of spiking correlations. In this paper we intentionally avoided assigning a biological interpretation to the inferred dynamic weights. Our goal was to demonstrate that a trove of additional information on neural coding is hidden in the dynamics of neural correlations, information that is typically omitted from the analysis of neuroscience data.

    Biological interpretation of the extracted dynamic weights can follow the terminology of short-term plasticity between synaptically connected neurons (Refs 25, 33-37) or spike transmission strength (Refs 30-32, 46). Alternatively, temporal changes in connection weights can be interpreted in terms of dynamically reconfigurable functional interactions of cortical networks (Refs 8-11, 13, 47) through which information flows. We also cannot exclude an interpretation that combines both ideas. In any event, our goal here is to extract these signals for a pair (Video 1, Fig.4), a cortical local circuit (Video 2, Fig.5), and the whole visual cortical network (Videos 3, 4 and Fig.7).

    To clarify this statement, we included a paragraph in the discussion section of the revised paper.

    (7) “Finally, it would be valuable if the authors could also demonstrate the superiority of DyNetCP qualitatively. Can DyNetCP discover something interesting for neuroscientists from the large-scale in vivo dataset that the existing method cannot?”

    The model discovers dynamic, time-varying changes in synchronous neuronal spiking (Videos 1-4) that more traditional methods like CCG or GLMCC are not able to detect. The revealed dynamics occur at very short time scales, on the order of just a few milliseconds, during stimulus presentation. Calculations of the intrinsic dimensionality of the spiking manifold (Fig. 8) reveal that up to 25 additional dimensions of the neural code can be recovered using our approach. These dimensions are typically omitted from the analysis of neural circuits using traditional methods.

    Reviewer #2 (Public Review):

    (1) “Simulation for dynamic connectivity. It certainly seems doable to simulate a recurrent spiking network whose weights change over time, and I think this would be a worthwhile validation for this DyNetCP model. In particular, I think it would be valuable to understand how much the model overfits, and how accurately it can track known changes in coupling strength.”

    We are very grateful to the reviewer for this insight. Verification of the model on synthetic data with known time-varying connectivity would indeed be very useful. We did generate a synthetic dataset to test some of the model performance metrics - i.e., its ability to distinguish True Positive (TP) from False Positive (FP) "serial" or "common input" connections (Fig.10A,B). Comparison of dynamic and static weights might indeed help to distinguish TP connections from artifactual FP connections.

    Generating a large synthetic dataset with known dynamic connections that mimics interactions in cortical networks is, however, a separate and non-trivial task that is beyond the scope of this work. Instead, we designed a model with an architecture where overfitting can be tested in two consecutive stages by comparison with descriptive statistical approaches – the CCG and JPSTH. The static stage 1 of the model predicts correlations that are statistically indistinguishable from the CCG results (Fig.2A,B). The dynamic stage 2 of the model produces dynamic weight matrices that faithfully reproduce the cJPSTH (Fig.4D,E). Calculated Pearson correlation coefficients and TOST testing enable optimization of the L2 regularization parameter, as shown in Fig.4 – supplement 1 and described in detail in the Methods section. The ability to compare the results of both stages separately to descriptive statistical results is the main advantage of the chosen model architecture, which allows us to verify that the model does not overfit and can predict changes in coupling strength at least as well as descriptive statistical approaches (see also our answer above to the Reviewer #1 questions).

    (2) “If the only goal is "smoothing" time-varying CCGs, there are much easier statistical methods to do this (c.f. McKenzie et al. Neuron, 2021. Ren, Wei, Ghanbari, Stevenson. J Neurosci, 2022), and simulations could be useful to illustrate what the model adds beyond smoothing.”

    We are grateful to the reviewer for bringing up these very interesting and relevant references, which we added to the discussion section of the paper. Especially of interest is the second one, which calculates the time-varying CCG weight ("efficacy" in the paper's terms) on the same Allen Institute visual dataset as our work uses. It is indeed an elegant way to extract time-variable coupling strength that is similar to what our model generates. The major difference of our model from that of Ren et al., as well as from GLMCC and any statistical approaches, is that DyNetCP learns the connections of an entire network jointly in one pass, rather than calculating coupling separately for each pair in the dataset without considering the relative influence of other pairs in the network. Hence, our model can infer connections beyond pairwise (see Fig. 11 and the corresponding discussion in Methods) while remaining computationally efficient.

    (3) “Stimulus vs noise correlations. For studying correlations between neurons in sensory systems that are strongly driven by stimuli, it's common to use shuffling over trials to distinguish between stimulus correlations and "noise" correlations or putative synaptic connections. This would be a valuable comparison for Figure 5 to show if these are dynamic stimulus correlations or noise correlations. I would also suggest just plotting the CCGs calculated with a moving window to better illustrate how (and if) the dynamic weights differ from the data.”

    Thank you for this suggestion. Note that for all weight calculations in our model, the standard jitter-correction procedure of Ref. 33 (Harrison et al., Neural Comput 2009) is first implemented to mitigate the influence of correlated slow fluctuations (slow "noise"). Please also note that to obtain the results in Fig. 5 we split the 440 total experimental trials for this session (when the animal is running, see Table 1) randomly into 352 training and 88 validation trials, selecting 44 training and 11 validation trials from each configuration of contrast or grating angle. We checked that changing this random selection produced the same results as shown in Fig.5.
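For readers unfamiliar with the interval-jitter idea, the correction can be sketched as follows: subtract from the raw CCG the mean CCG of surrogates whose spikes are resampled within small windows, which removes correlations slower than the window while preserving fast coincidences. This is a simplified illustration of the general approach of Harrison et al., not the exact procedure used in the paper; the function name, the window handling, and the bin-wise within-window shuffling are assumptions.

```python
import numpy as np

def jitter_corrected_ccg(src, tgt, max_lag=10, jitter_win=5, n_jitter=20, seed=0):
    """Jitter-corrected cross-correlogram for binned spike trains.

    The raw trial-averaged CCG is corrected by subtracting the mean CCG
    over surrogates in which source spikes are shuffled within
    non-overlapping windows of `jitter_win` bins.
    src, tgt: binary arrays (n_trials, n_bins).
    Returns (lags, corrected CCG); positive lag means tgt fires after src.
    """
    rng = np.random.default_rng(seed)
    lags = np.arange(-max_lag, max_lag + 1)

    def ccg(a, b):
        out = np.empty(len(lags))
        n = a.shape[1]
        for i, lag in enumerate(lags):
            if lag >= 0:
                out[i] = np.mean(a[:, :n - lag] * b[:, lag:])
            else:
                out[i] = np.mean(a[:, -lag:] * b[:, :n + lag])
        return out

    def jitter(a):
        # Shuffle bins independently within each jitter window and trial,
        # preserving spike counts per window but destroying fine timing.
        out = a.copy()
        for s in range(0, a.shape[1], jitter_win):
            block = out[:, s:s + jitter_win]
            for tr in range(block.shape[0]):
                rng.shuffle(block[tr])
        return out

    raw = ccg(src, tgt)
    null = np.mean([ccg(jitter(src), tgt) for _ in range(n_jitter)], axis=0)
    return lags, raw - null
```

After correction, slow co-modulation cancels and only structure faster than the jitter window, such as a millisecond-scale synaptic peak, survives.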

    Comparison of descriptive statistical results of pairwise cJPSTH and the model are shown in Fig. 4D,E. The difference between the two is characterized in Fig.4 – supplement 1 in detail as evidenced by Pearson coefficient and TOST statistical tests.

    Reviewer #2 (Recommendations for the Authors):

    (4) “The method is described as "unsupervised" in the abstract, but most researchers would probably call this "supervised" (the static model, for instance, is logistic regression).”

    The model architecture is composed of two stages to make parameter optimization grounded. While the first stage is regression, the second and the most important stage is not. Therefore, we believe the term “unsupervised” is justified.

    (5) “Introduction - it may be useful to mention that there have been some previous attempts to describe time-varying connectivity from spikes both with probabilistic models: Stevenson and Kording, Neurips (2011), Linderman, Stock, and Adams, Neurips (2014), Robinson, Berger, and Song, Neural Computation (2016), Wei and Stevenson, Neural Comp (2021) ... and with descriptive statistics: Fujisawa et al. Nat Neuroscience (2008), English et al. Neuron (2017), McKenzie et al. Neuron (2021).”

    We are very grateful to both reviewers for bringing up these very interesting and relevant references that we gladly included in the discussions within the Introduction and Discussion sections.

    (6) “In the section "Static connectivity inferred by the DyNetCP from in-vivo recordings is biologically interpretable"... I may have missed it, but how is the "functional delay" calculated? And am I understanding right that for the DyNetCP you are just using [w_{i→j}, w_{j→i}] in place of the CCG?”

    The functional delay is calculated as the time lag of the maximum (or minimum) in the CCG (or static weight matrix). The static weight that the model extracts is indeed the w_{i→j}·w_{j→i} product. We changed the text in this section to better clarify these definitions.
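As a concrete reading of this definition, the functional delay of a pair can be computed as the lag of the extremal deviation of the trial-averaged CCG from its baseline. The function below is an illustrative sketch, not the authors' code; the median-baseline subtraction used so that inhibitory troughs and excitatory peaks are handled by the same rule is an assumption.

```python
import numpy as np

def functional_delay(src, tgt, max_lag=10):
    """Functional delay: lag of the extremal CCG deviation from baseline.

    src, tgt: binary arrays (n_trials, n_bins).
    Positive delay means tgt tends to fire after src.
    """
    n = src.shape[1]
    lags = np.arange(-max_lag, max_lag + 1)
    vals = []
    for lag in lags:
        if lag >= 0:
            vals.append(np.mean(src[:, :n - lag] * tgt[:, lag:]))
        else:
            vals.append(np.mean(src[:, -lag:] * tgt[:, :n + lag]))
    vals = np.asarray(vals)
    # Deviation from the median baseline handles both excitatory peaks
    # (maximum) and inhibitory troughs (minimum) with one argmax rule.
    dev = np.abs(vals - np.median(vals))
    return int(lags[np.argmax(dev)])
```

For a delayed-copy toy pair, the returned delay equals the imposed shift, matching the "lag of the CCG maximum" definition above.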

    (7) “P14 typo "sparce spiking" sparse”

    Fixed. Thank you.

    (8) “Suggest rewording "Extra-laminar interactions reveal formation of neuronal ensembles with both feedforward (e.g., layer 4 to layer 5), and feedback (e.g., layer 5 to layer 4) drives." I'm not sure this method can truly distinguish common input from directed, recurrent cortical effects. Just as an example in Figure 5, it looks like 2→4, 0→4, and 3→2 are 0-lag effects. If you wanted to add the "functional delay" analysis to this laminar result, that could support some stronger claims about directionality, though.”

    The time lags for the results of Fig. 5 are indeed small, but quantifiable. The left panel of Fig. 5A shows static results with the correlation peaks shifted by 1 ms from zero lag.

    (9) “Methods - I think it would be useful to mention how many parameters the full DyNetCP model has.”

    Overall, after the architecture of Fig.1C is established, the dynamic weight averaging procedure is selected (Fig.9), and Fourier features are introduced (Fig.10), there are just a few parameters to optimize, including the L2 regularization (Fig.4 – figure supplement 1) and the loss coefficient (Fig.1 – figure supplement 1A). Other variables, common to all statistical approaches, include the bin sizes in lag time and in trial time. Decreasing the bin size improves time resolution while decreasing the number of spikes in each bin available for reliable inference. Therefore, the number-of-spikes threshold and other related thresholds α_s, α_w, α_p, as well as λ_i and λ_j, need to be adjusted accordingly (Fig.11), as discussed in detail in the Methods, Section 4. We included this sentence in the text.

    (10) “It may be useful to also mention recent results in mice (Senzai et al. Neuron, 2019) and monkeys (Trepka...Moore. eLife, 2022) that are assessing similar laminar structures with CCGs.”

    Thank you for pointing out these very interesting references. We added a paragraph in the "Dynamic connectivity in VISp primary visual area" section comparing our results with these findings. In short, we observed that connections are distributed across the cortical depth with nearly the same maximum weights (Fig.7A), which is inconsistent with the greatly diminished static connection efficacy within <200 µm of the source observed by Trepka et al., 2022. It is, however, consistent with the work of Senzai et al., 2019, which reveals much stronger long-distance correlations between layer 2/3 and layer 5 during waking than during sleep states. In both cases these observations represent static connections averaged over trial time, while the results presented in Video 3 and Fig.7A show strong temporal modulation of the connection strength between all layers during stimulus presentation. Therefore, our results demonstrate that tracking dynamic connectivity patterns in local cortical networks can be invaluable for assessing circuit-level dynamic network organization.

  5. eLife assessment

    This study presents a useful method for using multi-electrode spike recordings to track the time-varying functional connectivity between neurons. However, the evidence is incomplete: a demonstration of the utility of the method relative to conventional approaches is needed. If such a demonstration is made, this could be a tool for gaining insight into circuit structure.

  6. Reviewer #1 (Public Review):

    Summary:

    This work proposes a new method, DyNetCP, for inferring dynamic functional connectivity between neurons from spike data. DyNetCP is based on a neural network model with a two-stage model architecture of static and dynamic functional connectivity.

    This work evaluates the accuracy of the synaptic connectivity inference and shows that DyNetCP can infer the excitatory synaptic connectivity more accurately than a state-of-the-art model (GLMCC) by analyzing the simulated spike trains. Furthermore, it is shown that the inference results obtained by DyNetCP from large-scale in-vivo recordings are similar to the results obtained by the existing methods (jitter-corrected CCG and JPSTH). Finally, this work investigates the dynamic connectivity in the primary visual area VISp and in the visual areas using DyNetCP.

    Strengths:

    The strength of the paper is that it proposes a method to extract the dynamics of functional connectivity from spike trains of multiple neurons. The method is potentially useful for analyzing parallel spike trains in general, as there are only a few methods (e.g. Aertsen et al., J. Neurophysiol., 1989; Shimazaki et al., PLoS Comput Biol, 2012) that infer dynamic connectivity from spikes. Furthermore, the approach of DyNetCP is different from the existing methods: while the proposed method is based on a neural network, the previous methods are based on either descriptive statistics (JPSTH) or the Ising model.

    Weaknesses:

    Although the paper proposes a new method, DyNetCP, for inferring the dynamic functional connectivity, its strengths are neither clear nor directly demonstrated in this paper. That is, insufficient analyses are performed to support the usefulness of DyNetCP.

    First, this paper attempts to show the superiority of DyNetCP by comparing the performance of synaptic connectivity inference with GLMCC (Figure 2). However, the improvement in the synaptic connectivity inference does not seem to be convincing. While this paper compares the performance of DyNetCP with a state-of-the-art method (GLMCC), there are several problems with the comparison. For example:

    (1) This paper focused only on excitatory connections (i.e., ignoring inhibitory neurons).

    (2) This paper does not compare with existing neural network-based methods (e.g., CoNNECT: Endo et al. Sci. Rep. 2021; Deep learning: Donner et al. bioRxiv, 2024).

    (3) Only a population of neurons generated from the Hodgkin-Huxley model was evaluated.

    Thus, the results in this paper are not sufficient to conclude the superiority of DyNetCP in the estimation of synaptic connections. In addition, this paper compares the proposed method with standard statistical methods, namely the jitter-corrected CCG (Figure 3) and the JPSTH (Figure 4). Unfortunately, these comparisons only show that the results obtained by the proposed method are consistent with those obtained by the existing methods (CCG or JPSTH), which does not demonstrate the superiority of the proposed method.

    In summary, although DyNetCP has the potential to infer synaptic connections more accurately than existing methods, the paper does not provide sufficient analysis to make this claim. It is also unclear whether the proposed method is superior to the existing methods for estimating functional connectivity, such as jitter-corrected CCG and JPSTH. Thus, the strength of DyNetCP is unclear.

  7. Reviewer #2 (Public Review):

    Summary:

    Here the authors describe a model for tracking time-varying coupling between neurons from multi-electrode spike recordings. Their approach extends a GLM with static coupling between neurons to include dynamic weights, learned by a long short-term memory (LSTM) model. Each connection has a corresponding LSTM embedding and is read out by a multi-layer perceptron to predict the time-varying weight.
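As a rough illustration of the static coupling stage the reviewer describes, the sketch below fits a Bernoulli GLM that predicts a target neuron's spikes from lagged spikes of a source neuron (a toy sketch, not the authors' implementation; the lag count, learning rate, and simulated coupling are our assumptions; the dynamic LSTM stage is omitted):

```python
import numpy as np

def fit_static_coupling(src, tgt, lags=5, lr=0.1, epochs=300):
    """Fit a static GLM: P(tgt spike) = sigmoid(b + w . lagged src spikes).
    src, tgt: equal-length binary arrays of binned spikes."""
    # design matrix: column k-1 holds src shifted by k bins
    X = np.stack([np.roll(src, k)[lags:] for k in range(1, lags + 1)], axis=1)
    y = tgt[lags:]
    w, b = np.zeros(lags), 0.0
    for _ in range(epochs):  # plain gradient descent on the logistic loss
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w  # static coupling kernel over lags 1..lags

rng = np.random.default_rng(1)
src = (rng.random(5000) < 0.2).astype(float)
tgt = np.roll(src, 2)  # target fires 2 bins after the source
w = fit_static_coupling(src, tgt)
print(np.argmax(w) + 1)  # strongest coupling recovered at lag 2
```

In DyNetCP this static kernel is, per the review, augmented by connection-specific dynamic weights produced by an LSTM and read out by an MLP; the sketch above covers only the static GLM part.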

    Strengths:

    This is an interesting approach to an open problem in neural data analysis. I think, in general, the method would be interesting to computational neuroscientists.

    Weaknesses:

    It is somewhat difficult to interpret what the model is doing. I think it would be worthwhile to add some additional results that make it more clear what types of patterns are being described and how.

    Major Issues:

    Simulation for dynamic connectivity. It certainly seems doable to simulate a recurrent spiking network whose weights change over time, and I think this would be a worthwhile validation for this DyNetCP model. In particular, I think it would be valuable to understand how much the model overfits, and how accurately it can track known changes in coupling strength. If the only goal is "smoothing" time-varying CCGs, there are much easier statistical methods to do this (cf. McKenzie et al., Neuron, 2021; Ren, Wei, Ghanbari, Stevenson, J Neurosci, 2022), and simulations could be useful to illustrate what the model adds beyond smoothing.
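The moving-window CCG baseline alluded to here can be sketched as follows (illustrative code under our own assumptions; the window and step sizes, simulated rates, and onset of coupling are arbitrary choices):

```python
import numpy as np

def ccg(a, b, max_lag):
    """Cross-correlogram between two binned spike trains, lags -max_lag..+max_lag."""
    T = len(a)
    return np.array([np.dot(a[:T - k], b[k:]) if k >= 0 else np.dot(a[-k:], b[:T + k])
                     for k in range(-max_lag, max_lag + 1)])

def moving_window_ccg(a, b, max_lag, win, step):
    """CCG computed in sliding windows: a crude time-resolved coupling estimate."""
    starts = range(0, len(a) - win + 1, step)
    return np.stack([ccg(a[s:s + win], b[s:s + win], max_lag) for s in starts])

rng = np.random.default_rng(0)
a = (rng.random(2000) < 0.3).astype(float)
b = np.zeros_like(a)
b[1000:] = np.roll(a, 3)[1000:]  # lag +3 coupling appears mid-recording
M = moving_window_ccg(a, b, max_lag=5, win=500, step=500)
print(M[-1].argmax() - 5)  # late windows peak at lag +3
```

Comparing a model's inferred dynamic weights against such windowed CCGs on simulated data with known weight changes would show directly what the model adds beyond this kind of smoothing.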

    Stimulus vs noise correlations. For studying correlations between neurons in sensory systems that are strongly driven by stimuli, it's common to use shuffling over trials to distinguish between stimulus correlations and "noise" correlations or putative synaptic connections. This would be a valuable comparison for Figure 5 to show if these are dynamic stimulus correlations or noise correlations. I would also suggest just plotting the CCGs calculated with a moving window to better illustrate how (and if) the dynamic weights differ from the data.
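The trial-shuffling correction the reviewer describes (often called a shift predictor) can be sketched as below (illustrative code; the simulated stimulus drive and lag-2 coupling are our own assumptions, not the paper's data):

```python
import numpy as np

def ccg_pair(x, y, max_lag):
    """Counts of y spikes at lag k after x spikes, for k = -max_lag..+max_lag."""
    T = len(x)
    return np.array([np.dot(x[:T - k], y[k:]) if k >= 0 else np.dot(x[-k:], y[:T + k])
                     for k in range(-max_lag, max_lag + 1)])

def shift_predictor_ccg(A, B, max_lag):
    """A, B: (trials, time) binned spikes. Subtract the trial-shifted CCG to
    remove stimulus-locked correlations, leaving putative 'noise' coupling."""
    raw = np.mean([ccg_pair(a, b, max_lag) for a, b in zip(A, B)], axis=0)
    Bs = np.roll(B, 1, axis=0)  # pair trial i of A with trial i-1 of B
    shuf = np.mean([ccg_pair(a, b, max_lag) for a, b in zip(A, Bs)], axis=0)
    return raw - shuf

rng = np.random.default_rng(0)
trials, T = 200, 300
rate = 0.1 + 0.3 * np.exp(-((np.arange(T) - 150) / 30.0) ** 2)  # shared stimulus drive
A = (rng.random((trials, T)) < rate).astype(float)
noise = (rng.random((trials, T)) < rate).astype(float)
B = np.clip(noise + np.roll(A, 2, axis=1), 0, 1)  # stimulus drive + direct lag-2 coupling
corrected = shift_predictor_ccg(A, B, max_lag=5)
print(corrected.argmax() - 5)  # residual peak at lag +2: the putative direct coupling
```

The broad stimulus-locked hump cancels in the corrected CCG because the shift predictor pairs spike trains from different trials that share only the stimulus drive; only the within-trial lag-2 coupling survives.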