Designing optimal perturbation inputs for system identification in neuroscience

Curation statements for this article:
  • Curated by eLife


    eLife Assessment

    The authors establish solid theoretical principles for designing brain perturbations under the assumption that brain activity evolves under a linear model. By prioritizing low-variance components, resonant frequencies, and hub nodes, this framework provides an important foundation for optimizing information gain, neural state classification, and the control of neural dynamics. However, the lack of investigation of model mismatch makes the study incomplete.

This article has been reviewed by the following groups


Abstract

Investigating the dynamics of neural networks, which are governed by connectivity between neurons, is a fundamental challenge in neuroscience. Because passive (spontaneous) activity provides only limited information for estimating connectivity, perturbation-based approaches are widely applied in neuroscience, as they can evoke otherwise hidden dynamics. However, the characteristics of such perturbations have typically been designed from empirical or biological intuition. To enable more accurate estimation of connectivity, we propose a data-driven and theoretically grounded framework for optimally designing perturbation inputs, based on formulating the neural model as a control system. The core theoretical insight underlying our approach is that neural signals observed in the passive state lack sufficient latent information, which leads to failures in system identification. Perturbations reveal these hidden dynamics and thereby improve estimation. Guided by these insights, we derive a theoretical basis for optimizing perturbation inputs that minimize estimation errors in neural system identification. Building on this, we further explore how this theory relates to stimulation patterns commonly used in neuroscience, such as frequency, impulse, and step inputs. We demonstrate the effectiveness of the framework through simulations grounded in experimental paradigms such as neural state classification and optimal control of neural states. Our theoretical analysis, together with multiple simulations, consistently shows that perturbations designed according to our framework achieve substantially more accurate system identification than conventional, intuition-based inputs. This study provides a theoretical foundation for designing perturbation inputs that yield accurate estimates of neural dynamics. This, in turn, enables reliable discrimination of neural states, such as levels of consciousness and pathological conditions, and facilitates precise control of their transitions toward recovery from abnormal states.
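The core idea, that a linear neural system is identified more accurately under perturbation than from passive activity alone, can be illustrated with a minimal simulation. The sketch below is not the authors' method; the system size, noise level, and white-noise input are illustrative assumptions. It fits the connectivity matrix A of x[t+1] = A x[t] + B u[t] + w[t] by least squares, once with no input and once with a perturbing input:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 5000

# Hypothetical ground-truth connectivity: a stable random matrix.
A = rng.normal(scale=0.3, size=(n, n))
A *= 0.95 / max(abs(np.linalg.eigvals(A)))
B = np.eye(n)  # assume, for illustration, that inputs drive every node

def simulate(u):
    """Simulate x[t+1] = A x[t] + B u[t] + w[t] with Gaussian noise."""
    x = np.zeros((T + 1, n))
    w = rng.normal(scale=0.1, size=(T, n))
    for t in range(T):
        x[t + 1] = A @ x[t] + B @ u[t] + w[t]
    return x

def estimate_A(x, u):
    """Least-squares estimate of A, with the known input effect removed."""
    X0, X1 = x[:-1], x[1:] - u @ B.T
    return np.linalg.lstsq(X0, X1, rcond=None)[0].T

u_passive = np.zeros((T, n))                      # spontaneous activity only
u_active = rng.normal(scale=1.0, size=(T, n))     # white-noise perturbation

err_passive = np.linalg.norm(estimate_A(simulate(u_passive), u_passive) - A)
err_active = np.linalg.norm(estimate_A(simulate(u_active), u_active) - A)
print(err_passive, err_active)  # the perturbed estimate has smaller error
```

In this toy setting the perturbation raises the variance of the observed trajectory in every direction, which conditions the least-squares problem better and shrinks the estimation error relative to the passive run.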

Article activity feed

  1. Joint Public Review:

    Summary:

    Inferring so-called "functional connectivity" between neurons or groups of neurons is important both for validating models and for inferring brain state. Under the assumption that brain dynamics are linear, the authors show that the error in estimating functional connectivity depends only on the eigenvalues of the covariance matrix of the observed data, and that it is the small eigenvalues (corresponding to directions in which the variance of brain activity is low) that lead to large estimation errors. Based on this, the authors show that to achieve low estimation error, it is important to excite the resonant frequencies and to perturb well-connected hubs. The authors propose a practical iterative approach to estimating the functional connectivity and demonstrate faster convergence to the optimal estimate compared to passive observation.
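The review's central claim, that estimation error is dominated by the low-variance directions of the observed covariance, can be checked in a toy setting. In the hypothetical sketch below (the diagonal dynamics and per-coordinate input powers are illustrative choices, not taken from the paper), the least-squares error of the fitted connectivity is largest along the covariance eigendirection with the smallest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 4, 20000

A = np.diag([0.9, 0.5, 0.3, 0.1])  # hypothetical diagonal dynamics
# Drive each coordinate with a different input power so the observed
# covariance has both large and small eigenvalues.
u_scale = np.array([3.0, 1.0, 0.3, 0.05])

x = np.zeros((T + 1, n))
w = rng.normal(scale=0.05, size=(T, n))
u = rng.normal(size=(T, n)) * u_scale
for t in range(T):
    x[t + 1] = A @ x[t] + u[t] + w[t]

# Least-squares fit of A (known input effect subtracted).
X0, X1 = x[:-1], x[1:] - u
A_hat = np.linalg.lstsq(X0, X1, rcond=None)[0].T

cov = X0.T @ X0 / T
evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
E = A_hat - A
err_per_dir = np.linalg.norm(E @ evecs, axis=0)  # error along each direction
print(evals)
print(err_per_dir)  # largest error sits on the smallest eigenvalue
```

This mirrors the reviewers' summary: the error along eigendirection i scales roughly with 1/sqrt(lambda_i), so weakly excited (low-variance) directions are the ones that corrupt the estimate.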

    Strengths:

    The main contribution of the study is the derivation of an explicit expression for the error in functional connectivity that depends only on the covariance matrix of the observed data. If valid, this result could have a profound impact on the field. The study also motivates the current shift toward closed-loop experiments by demonstrating that actively probing the system with perturbations is more effective than passive estimation from resting-state activity. Finally, the relative simplicity of the model makes its practical applications straightforward, as the authors illustrate in the context of brain state classification and neural control.

    Weaknesses:

    The derivation of the main error term misses some important steps, which complicates peer review at this stage. In particular, factorisation of the covariance into noise and the inverse of the observation covariance matrix needs a more thorough justification. The cited sources do not contain the derivation for a noise term with full covariance, which is essential for deriving this error term.

    The practical recommendation at the end of the paper also requires clearer guidance on how the proposed perturbations are constructed, and on how many times and for how long the system is stimulated in each iteration of the experiment.

    Finally, there is no analysis of model mis-specification. In particular, the true dynamics are unlikely to be linear; the noise is unlikely to be either Gaussian or uncorrelated across time; and the B matrix is unlikely to be known perfectly. We're not suggesting that the authors consider a more complex model, but it's important to know how sensitive their method is to model mismatch. If nothing can be done analytically, then simulations would at least provide some kind of guide.
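The sensitivity to model mismatch that the reviewers ask about can at least be probed numerically. The sketch below is an illustrative choice of mismatch, not taken from the paper: a saturating tanh nonlinearity with adjustable gain (gain -> 0 recovers the linear model). It fits a linear model to increasingly nonlinear data and tracks how the error in the recovered connectivity grows:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 5, 10000

# Hypothetical ground-truth connectivity: a stable random matrix.
A = rng.normal(scale=0.3, size=(n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))

def fit_error(gain):
    """Simulate x[t+1] = tanh(gain*(A x + u))/gain + w, fit a linear
    model by least squares, and return the Frobenius error in the
    recovered A. As gain -> 0 the dynamics become exactly linear."""
    x = np.zeros((T + 1, n))
    u = rng.normal(size=(T, n))
    w = rng.normal(scale=0.05, size=(T, n))
    for t in range(T):
        pre = A @ x[t] + u[t]
        x[t + 1] = np.tanh(gain * pre) / gain + w[t]
    A_hat = np.linalg.lstsq(x[:-1], x[1:] - u, rcond=None)[0].T
    return np.linalg.norm(A_hat - A)

errs = {g: fit_error(g) for g in (0.01, 0.5, 1.0)}
print(errs)  # error grows as the true dynamics depart from linearity
```

A sweep like this, over nonlinearity strength, noise correlation, or errors in the assumed B matrix, would give the kind of numerical sensitivity guide the review requests, even where no analytical result is available.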