Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions

Curation statements for this article:
  • Curated by eLife

This article has been reviewed by the following groups.


Abstract

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use with a freely available EEG dataset of audiobook listening, using continuous speech as a sample paradigm. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time lags. This allows asking two questions about each predictor variable: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
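As a concrete illustration, the mTRF forward model described above (each predictor convolved with its own response function, and the results summed) can be sketched in a few lines of NumPy. All names, dimensions, and data here are made up for illustration; this is not Eelbrain's API:

```python
import numpy as np

# Toy sketch of the mTRF forward model: the predicted brain signal is the
# sum over predictors of each predictor convolved with its own TRF.
# All names and values are hypothetical, for illustration only.

rng = np.random.default_rng(0)
n_times = 1000   # samples in the continuous recording
n_lags = 40      # length of each TRF, in samples

# Two hypothetical time-continuous predictors (e.g., acoustic envelope
# and word onsets), as rows of a (n_predictors, n_times) array.
predictors = rng.standard_normal((2, n_times))

# One hypothetical TRF per predictor: (n_predictors, n_lags).
trfs = rng.standard_normal((2, n_lags))

# Predicted response: sum of predictor-TRF convolutions, trimmed to length.
predicted = np.zeros(n_times)
for x, h in zip(predictors, trfs):
    predicted += np.convolve(x, h)[:n_times]
```

Estimating the mTRF inverts this process: given the measured brain signal and the predictors, find the kernels that best predict the signal.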

Article activity feed

  1. eLife assessment

    Brodbeck et al. offer a timely and important contribution to how neural signals in response to continuous temporal modulations (as seen in speech and language processing) can be modelled effectively using temporal response functions. They offer a convincing new approach that includes a novel application of a boosting algorithm in addition to an accessible and didactically useful toolbox for analysis. With further comparison to existing toolboxes, or a more extensive comparison of boosting and ridge regression via simulation, this work will have a compelling impact on methods in speech and language neuroscience, as well as in cognitive neuroscience more broadly.

  2. Reviewer #1 (Public Review):

    The paper starts with a general explanation of the method behind temporal response functions (TRFs), an analysis technique for M/EEG data that has led to many new findings in the last few years. The authors touch upon convolution and show how a linear model can be used to model non-linear responses. The methods section provides a practical introduction to the TRF, in which advice on general analysis steps - such as EEG preprocessing and centering the predictor variables - is intertwined with explanations of the use of the toolbox Eelbrain. The results section outlines how to use the outcome of a TRF model to answer (cognitive) neuroscientific questions and provides a comparison between ERPs and TRFs. The discussion section touches upon a couple of considerations, the most important one being a discussion of the sparsity prior and boosting algorithm.
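    The regression formulation mentioned here can be made concrete with a small, toolbox-independent simulation: build a lagged design matrix from a centred predictor and recover a known TRF by ordinary least squares. (Eelbrain itself estimates TRFs with a boosting algorithm; this sketch only illustrates the underlying linear model.)

```python
import numpy as np

# Toolbox-independent illustration: estimate a TRF by ordinary least
# squares on a lagged design matrix, recovering a known simulated kernel.
# (Eelbrain itself uses a boosting algorithm instead.)

rng = np.random.default_rng(1)
n_times, n_lags = 500, 20

x = rng.standard_normal(n_times)
x -= x.mean()                                # centre the predictor

true_trf = np.exp(-np.arange(n_lags) / 5.0)  # hypothetical TRF shape
y = np.convolve(x, true_trf)[:n_times]       # simulated brain response

# Design matrix: column k holds the predictor delayed by k samples.
X = np.zeros((n_times, n_lags))
for k in range(n_lags):
    X[k:, k] = x[:n_times - k]

# The least-squares solution recovers the simulated TRF.
trf_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```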

    A first great merit of this paper is that it manages to clearly explain both the analysis and the important decisions a researcher needs to make in just a few pages. When following the steps outlined in the methods section in particular, the researcher will know how to implement a TRF model using Eelbrain, as well as have a general idea about the decisions that one needs to make in the process. Furthermore, the explicit comparison between ERPs and TRFs will help many understand what TRFs are, and in which ways they allow for more fine-grained analysis of the data than ERPs. For these reasons, this work is a suitable starting point for anyone who wants to get started with TRFs, and a good addition to the existing set of papers on this topic, such as Crosse, Di Liberto, Bednar, and Lalor (2016) and Sassenhagen (2019).

    An important contribution of this work is the implementation of the boosting algorithm. Although it is yet to be determined whether this algorithm creates better models of the neural data than previous implementations of the TRF, the authors provide good arguments for the suitability of this algorithm for the analysis of neural time-series data.
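    The core idea of the boosting algorithm referred to here can be sketched compactly: start from an all-zero TRF, repeatedly add a small fixed increment at whichever lag most reduces the squared error, and stop when no step improves the fit, which yields a sparse kernel. Step size, kernel length, and data below are hypothetical, and the full algorithm as used in practice additionally relies on held-out data for early stopping:

```python
import numpy as np

# Toy sketch of boosting for TRF estimation: start from an all-zero
# kernel and greedily add a small fixed increment at the lag that most
# reduces the squared error, stopping when no step improves the fit.
# Step size, kernel length, and data are hypothetical.

rng = np.random.default_rng(2)
n_times, n_lags = 400, 15

x = rng.standard_normal(n_times)
true_trf = np.zeros(n_lags)
true_trf[[3, 7]] = [1.0, -0.5]          # sparse ground-truth kernel
y = np.convolve(x, true_trf)[:n_times]  # simulated response

# Lagged design matrix: column k = predictor delayed by k samples.
X = np.zeros((n_times, n_lags))
for k in range(n_lags):
    X[k:, k] = x[:n_times - k]

trf = np.zeros(n_lags)
residual = y.copy()
delta = 0.05                            # fixed step size
for _ in range(500):
    base = np.sum(residual ** 2)
    best_gain, best_k, best_step = 0.0, None, 0.0
    for k in range(n_lags):
        for step in (delta, -delta):
            gain = base - np.sum((residual - step * X[:, k]) ** 2)
            if gain > best_gain:
                best_gain, best_k, best_step = gain, k, step
    if best_k is None:                  # no step improves the fit: stop
        break
    trf[best_k] += best_step
    residual -= best_step * X[:, best_k]
```

Because each update touches a single lag, coefficients that do not help prediction simply stay at zero, which is what makes the resulting TRFs sparse.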

    On the practical side, the tutorial analyses are well-designed for the target audience, with interpretable questions and contrasts relevant to the field of cognitive neuroscience. The corresponding scripts are clear and well-commented. Finally, the implementation of this method in Python will be greatly appreciated - especially by those who do not have access to a MATLAB license.

    All in all, this is a highly didactic paper that will help many researchers get started with temporal response functions both theoretically (to understand the method) and practically (to work with the toolbox). As such, this work has the potential to be of great importance in the field of cognitive neuroscience.

  3. Reviewer #2 (Public Review):

    The current manuscript presents a new toolbox, for Python, to apply temporal response functions (TRFs). TRFs are becoming more widely used, and providing an accessible toolbox for a wider audience is very important and should be promoted. Overall, the code accompanying the manuscript also seems to provide all the steps to do the analysis and could potentially be very useful. However, in its current version, the toolbox relies on a single way to solve the TRF estimation problem: the boosting algorithm. Providing a single algorithm makes it difficult to compare results from this toolbox with outcomes of other toolboxes, which rely on different methods to solve the regression. The user is forced to work with this choice and is not provided other options (or easy ways to implement new ones). Additionally, it seems unclear whether the toolbox is fully able to provide the means to generate predictors that are typically used in a TRF analysis. The GitHub code provided for generating the predictors does not seem to be fully integrated with Eelbrain and relies on code in the trftools toolbox, which contains code that the authors deem not yet stable enough to be released. Finally, the overall logic and idea behind the toolbox could have been explained better to make it more accessible to use.
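    Regarding predictor generation, a common predictor such as the acoustic envelope can in fact be computed without any particular toolbox. The following toolbox-independent sketch (with hypothetical sampling rates and a simulated waveform standing in for real audio) rectifies and smooths the signal, then downsamples it to the EEG rate:

```python
import numpy as np

# Toolbox-independent sketch of one common predictor: the acoustic
# envelope. Rectify a (here simulated) waveform, smooth with a 50 ms
# moving average, and downsample to the EEG sampling rate.
# All sampling rates and the waveform are hypothetical.

rng = np.random.default_rng(3)
fs_audio, fs_eeg = 8000, 100        # audio and EEG sampling rates (Hz)
duration = 2.0                      # seconds

waveform = rng.standard_normal(int(fs_audio * duration))  # stand-in audio

win = np.ones(int(0.05 * fs_audio))           # 50 ms smoothing window
win /= win.sum()
envelope = np.convolve(np.abs(waveform), win, mode='same')

# Downsample by averaging within each EEG sample period.
step = fs_audio // fs_eeg
n_samples = len(envelope) // step
predictor = envelope[:n_samples * step].reshape(n_samples, step).mean(axis=1)
```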