Cortical activity during naturalistic music listening reflects short-range predictions based on long-term experience
Curation statements for this article:
Curated by eLife
eLife assessment
This study models the predictions a listener makes in music in two ways: how different model algorithms compare in their performance at predicting the upcoming notes in a melody, and how well they predict listeners' brain responses to these notes. The study will be valuable to the field as it implements three contemporary models of music prediction. In a set of solid analyses, the authors find that musical melodies are best predicted by models taking into account long-term experience of musical melodies, whereas brain responses are best predicted by applying these models to only a few most recent notes.
This article has been reviewed by the following groups
Listed in
- Evaluated articles (eLife)
Abstract
Expectations shape our experience of music. However, the internal model upon which listeners form melodic expectations is still debated. Do expectations stem from Gestalt-like principles or statistical learning? If the latter, does long-term experience play an important role, or are short-term regularities sufficient? And finally, what length of context informs contextual expectations? To answer these questions, we presented human listeners with diverse naturalistic compositions from Western classical music, while recording neural activity using MEG. We quantified note-level melodic surprise and uncertainty using various computational models of music, including a state-of-the-art transformer neural network. A time-resolved regression analysis revealed that neural activity over fronto-temporal sensors tracked melodic surprise particularly around 200 ms and 300–500 ms after note onset. This neural surprise response was dissociated from sensory-acoustic and adaptation effects. Neural surprise was best predicted by computational models that incorporated long-term statistical learning—rather than by simple, Gestalt-like principles. Yet, intriguingly, the surprise reflected primarily short-range musical contexts of less than ten notes. We present a full replication of our novel MEG results in an openly available EEG dataset. Together, these results elucidate the internal model that shapes melodic predictions during naturalistic music listening.
Article activity feed
Author Response
Reviewer #1 (Public Review):
This paper describes the results of a MEG study where participants listened to classical MIDI music. The authors then use lagged linear regression (with 5-fold cross-validation) to predict the response of the MEG signal using (1) note onsets, (2) several additional acoustic features, and (3) a measure of note surprise computed from one of several models. The authors find that the surprise regressors predict additional variance above and beyond that already predicted by the other note onset and acoustic features (the "baseline" model), which serves as a replication of a recent study by Di Liberto.
They compute note surprisal using four models: (1) a hand-crafted Bayesian model designed to reflect some of the dominant statistical properties of Western music (Temperley), (2) an n-gram model trained on one musical piece (IDyOM stm), (3) an n-gram model trained on a much larger corpus (IDyOM ltm), and (4) a transformer DNN trained on a mix of polyphonic and monophonic music (MT). For each model, they train the model using varying amounts of context.
They find that the transformer model (MT) and long-term n-gram model (IDyOM ltm) give the best neural prediction accuracy, both of which give ~3% improvement in predicted correlation values relative to their baseline model. In addition, they find that for all models, the prediction scores are maximal for contexts of ~2-7 notes. These neural results do not appear to reflect the overall accuracy of the models tested since the short-term n-gram model outperforms the long-term n-gram model and the music transformer's accuracy improves substantially with additional context beyond 7 notes. The authors replicate all these findings in a separate EEG experiment from the Di Liberto paper.
Overall, this is a clean, nicely-conducted study. However, the conclusions do not follow from the results for two main reasons:
- Different features of natural stimuli are almost always correlated with each other to some extent, and as a consequence, a feature (e.g., surprise) can predict the neural response even if it doesn't drive that response. The standard approach to dealing with this problem, taken here, is to test if a feature improves the prediction accuracy of a model above and beyond that of a baseline model (using cross-validation to avoid over-fitting). If the feature improves prediction accuracy, then one can conclude that the feature contributes additional, unique variance. However, there are two key problems: (1) the space of possible features to control for is vast, and there will almost always be uncontrolled-for features; (2) the relationship between the relevant control features and the neural response could be nonlinear. As a consequence, if some new feature (here surprise) contributes a little bit of additional variance, this could easily reflect additional uncontrolled features or some nonlinear relationship that was not captured by the linear model. This problem becomes more acute the smaller the effect size since even a small inaccuracy in the control model could explain the resulting finding. This problem is not specific to this study but is a problem nonetheless.
We understand the reviewer's point and agree that it indeed applies not exclusively to the present study, but likely to many studies in this field and beyond. We disagree, however, that it constitutes a problem per se. We maintain that the approach of adding a feature, observing that it increases cross-validated prediction performance, and concluding that the feature is therefore relevant, is a valid one. Indeed, it is possible and even likely that not all relevant features (or non-linear transformations thereof) will be present in the control/baseline model. If a to-be-tested feature increases predictive performance and therefore explains relevant variance, then that means that part of what drives the neural response is non-trivially related to the to-be-tested feature. The true underlying relationship may not be linear, and later work may uncover more complex relationships that subsume the earlier discovery, but the original conclusion remains justified.
Importantly, we wish to emphasize that the key conclusions of our study primarily rest upon comparisons between regression models that are by design equally complex, such as surprise-according-to-MT versus surprise-according-to-IDyOM, and comparisons across different context lengths. We maintain that the comparison with the Baseline model is also important, but even taking the reviewer's worry here into account, the comparison between different equally-complex regression models should not suffer from it to the same extent as a model-versus-baseline comparison.
- The authors make a distinction between "Gestalt-like principles" and "statistical learning" but they never define what is meant by this distinction. The Temperley model encodes a variety of important statistics of Western music, including statistics such as keys that are unlikely to reflect generic Gestalt principles. The Temperley model builds in some additional structure such as the notion of a key, which the n-gram and transformer models must learn from scratch. In general, the models being compared differ in so many ways that it is hard to conclude much about what is driving the observed differences in prediction accuracy, particularly given the small effect sizes. The context manipulation is more controlled, and the fact that neural prediction accuracy dissociates from the model performance is potentially interesting. However, I am not confident that the authors have a good neural index of surprise for the reasons described above, and this limits the conclusions that can be drawn from this manipulation.
First of all, we would like to apologize for any lack of clarity regarding the distinction between Gestalt-like and statistical models. We take Gestalt-like models to be those that explain music perception as following a restricted set of rules, such as that adjacent notes tend to be close in pitch. In contrast, as the reviewer correctly points out, statistical learning models have no such a priori principles and must learn similar or other principles from scratch. Importantly, the distinction between these two classes of models is not one we make for the first time in the context of music perception. Gestalt-like models have a long tradition in musicology and the study of music cognition, dating back to Meyer (1957). The Implication-Realization model developed by Eugene Narmour (Narmour, 1990, 1992; Schellenberg, 1997) is another example of a rule-based theory of music listening, and it influenced the model by David Temperley, which we applied in the present study as the most recent influential Gestalt model of melodic expectations. Concurrently with the development of Gestalt-like models, a second strand of research framed music listening in light of information theory and statistical learning (Bharucha, 1987; Cohen, 1962; Conklin & Witten, 1995; Pearce & Wiggins, 2012). Previous work has made the same distinction and compared models of music along the same axis (Krumhansl, 2015; Morgan et al., 2019a; Temperley, 2014). We have updated the manuscript to elaborate on this distinction and to highlight that it is not uncommon.
Second, we emphasize that we compare the models directly in terms of their predictive performance both for upcoming musical notes and for neural responses. This predictive performance is not dependent on the internal details of any particular model; in principle, it would even be possible to include a "human expert" model where we ask professional composers to predict upcoming notes given a previous context. Because the relevant comparison metric is independent of model details, we believe comparing the models is justified. Again, this is in line with previously published work in music (Morgan et al., 2019a), language (Heilbron et al., 2022; Schmitt et al., 2021; Wilcox et al., 2020), and other domains (Planton et al., 2021). Such work compares different models in how well they align with human statistical expectations by assessing how well different models explain predictability/surprise effects in behavioral and/or brain responses.
Third, regarding the doubts on the neural index of surprise used: we respond to this concern below, after reviewer 1’s first point to which the present comment refers (the referred-to comment was not included in the “essential revisions” here).
Reviewer #2 (Public Review):
This manuscript focuses on the basis of musical expectations/predictions, both in terms of the basis of the rules by which these are generated, and the neural signatures of surprise elicited by violation of these predictions.
Expectation generation models directly compared were gestalt-like, n-gram, and a recently-developed Music Transformer model. Both shorter and longer temporal windows of sampling were also compared, with striking differences in performance between models.
Surprise (defined as per convention as negative log prior probability of the current note) responses were assessed in the form of evoked response time series, recorded separately with both MEG and EEG (the latter in a previously recorded freely available dataset). M/EEG data correlated best with surprise derived from musical models that emphasised long-term learned experiences over short-term statistical regularities for rule learning. Conversely, the best performance was obtained when models were applied to only the most recent few notes, rather than longer stimulus histories.
Uncertainty was also computed as an independent variable, defined as entropy, and equivalent to the expected surprise of the upcoming note (the sum, over possible note values, of the probability of each value times the surprise associated with it). Uncertainty did not improve predictive performance on M/EEG data, so was judged not to have distinct neural correlates in this study.
The paradigm used was listening to naturalistic musical melodies.
A time-resolved multiple regression analysis was used, incorporating a number of binary and continuous variables to capture note onsets, contextual factors, and outlier events, in addition to the statistical regressors of interest derived from the compared models.
Regression data were subjected to non-parametric spatiotemporal cluster analysis, with weights from significant clusters projected into scalp space as planar gradiometers and into source space as two equivalent current dipoles per cluster.
General comments:
The research questions are sound, with a clear precedent of similar positive findings, but numerous unanswered questions and unexplored avenues.
I think there are at least two good reasons to study this kind of statistical response with music: firstly that it is relevant to the music itself; secondly, because the statistical rules of music are at least partially separable from lower-level processes such as neural adaptation.
Whilst some of the underlying musical theory and its implementation are beyond my expertise, the choice, implementation, fitting, and comparison of statistical models of music seem robust and meticulous.
The MEG and EEG data processing is also in line with accepted best practice and meticulously performed.
The manuscript is very well-written and free from grammatical or other minor errors.
The discussion strikes a brilliant balance of clearly laying out the interim conclusions and advances, whilst being open about caveats and limitations.
Overall, the manuscript presents a range of highly interesting findings which will appeal to a broad audience, based on rigorous experimental work, meticulous analysis, and fair and clear reporting.
We thank the reviewer for their detailed and positive evaluation of our manuscript.
Reviewer #3 (Public Review):
The authors compare several models of musical prediction in their accuracy and in their ability to explain neural data from MEG and EEG experiments. The results offer both methodological advancements, by introducing models that go beyond the current state of the art, and theoretical advancements, by inferring the effects of long- and short-term exposure on prediction. The results are clear and the interpretation is for the most part well reasoned.
At the same time, there are important aspects to consider. First, the authors may overstate the advancement of the Music Transformer with the present stimuli, as its increase in performance requires a considerably longer context than the other models. Secondly, the Baseline model, to which the other models are compared, does not contain any pitch information on which these models operate. As such, it's unclear if the advancements of these models come from being based on new information or from the operations they perform on this information, as claimed. Lastly, the source analysis yields some surprising results that don't fit with previous literature. For example, the authors show that responses to note onsets are encoded in Broca's area, whereas they would be expected more likely in the primary auditory cortex. While this issue is not discussed by the authors, it may put the rest of the source analysis into question.
While these issues are serious ones, the work still makes important advancements for the field and I commend the authors on a remarkably clear and straightforward text advancing the modeling of predictions in continuous sequences.
We thank the reviewer for their compliments.
Reviewer #1 (Public Review):
This paper describes the results of a MEG study where participants listened to classical MIDI music. The authors then use lagged linear regression (with 5-fold cross-validation) to predict the response of the MEG signal using (1) note onsets, (2) several additional acoustic features, and (3) a measure of note surprise computed from one of several models. The authors find that the surprise regressors predict additional variance above and beyond that already predicted by the other note onset and acoustic features (the "baseline" model), which serves as a replication of a recent study by Di Liberto.
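For concreteness, here is a minimal sketch of this kind of time-lagged regression with 5-fold cross-validation. Everything in it is an illustrative assumption rather than the study's actual code: the sampling rate, the placeholder arrays `X` (stimulus features) and `y` (one MEG sensor), and the choice of plain least squares.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

sfreq = 100                             # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))        # placeholder stimulus features
y = rng.normal(size=10_000)             # placeholder MEG time course

def lag_features(X, max_lag):
    """Stack time-shifted copies of each regressor (lags 0..max_lag)."""
    lagged = []
    for lag in range(max_lag + 1):
        shifted = np.roll(X, lag, axis=0)
        shifted[:lag] = 0.0             # discard samples wrapped around by np.roll
        lagged.append(shifted)
    return np.hstack(lagged)

X_lagged = lag_features(X, max_lag=sfreq)   # lags spanning 0-1 s

# 5-fold cross-validated prediction accuracy, scored as the correlation
# between predicted and observed sensor signals on held-out folds.
scores = []
for train, test in KFold(n_splits=5).split(X_lagged):
    model = LinearRegression().fit(X_lagged[train], y[train])
    scores.append(np.corrcoef(model.predict(X_lagged[test]), y[test])[0, 1])
print(f"mean cross-validated r = {np.mean(scores):.3f}")
```

Testing whether surprise matters then amounts to comparing this score between a baseline feature set and the same set augmented with the surprise regressor.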
They compute note surprisal using four models: (1) a hand-crafted Bayesian model designed to reflect some of the dominant statistical properties of Western music (Temperley), (2) an n-gram model trained on one musical piece (IDyOM stm), (3) an n-gram model trained on a much larger corpus (IDyOM ltm), and (4) a transformer DNN trained on a mix of polyphonic and monophonic music (MT). For each model, they train the model using varying amounts of context.
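As a toy illustration of the n-gram approach (a simplification, not IDyOM itself; the smoothing scheme, vocabulary size, and online updating are all assumptions), surprisal can be computed from incrementally accumulated context counts:

```python
import numpy as np
from collections import defaultdict

def ngram_surprisal(sequence, n=3, alpha=1.0, vocab_size=128):
    """Per-note surprisal -log2 p(note | previous n-1 notes), from
    add-alpha-smoothed counts updated online as the piece unfolds
    (loosely in the spirit of a short-term n-gram model)."""
    context_counts = defaultdict(lambda: defaultdict(float))
    surprisals = []
    for t, note in enumerate(sequence):
        context = tuple(sequence[max(0, t - n + 1):t])
        counts = context_counts[context]
        total = sum(counts.values())
        p = (counts[note] + alpha) / (total + alpha * vocab_size)
        surprisals.append(-np.log2(p))
        counts[note] += 1               # learn from the note after scoring it
    return surprisals

melody = [60, 62, 64, 65, 64, 62, 60]   # toy MIDI pitch sequence
print(ngram_surprisal(melody))
```

A long-term variant would pre-train the counts on a large corpus instead of (or in addition to) accumulating them within the current piece; varying `n` corresponds to the context-length manipulation.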
They find that the transformer model (MT) and long-term n-gram model (IDyOM ltm) give the best neural prediction accuracy, both of which give ~3% improvement in predicted correlation values relative to their baseline model. In addition, they find that for all models, the prediction scores are maximal for contexts of ~2-7 notes. These neural results do not appear to reflect the overall accuracy of the models tested since the short-term n-gram model outperforms the long-term n-gram model and the music transformer's accuracy improves substantially with additional context beyond 7 notes. The authors replicate all these findings in a separate EEG experiment from the Di Liberto paper.
Overall, this is a clean, nicely-conducted study. However, the conclusions do not follow from the results for two main reasons:
1. Different features of natural stimuli are almost always correlated with each other to some extent, and as a consequence, a feature (e.g., surprise) can predict the neural response even if it doesn't drive that response. The standard approach to dealing with this problem, taken here, is to test if a feature improves the prediction accuracy of a model above and beyond that of a baseline model (using cross-validation to avoid over-fitting). If the feature improves prediction accuracy, then one can conclude that the feature contributes additional, unique variance. However, there are two key problems: (1) the space of possible features to control for is vast, and there will almost always be uncontrolled-for features; (2) the relationship between the relevant control features and the neural response could be nonlinear. As a consequence, if some new feature (here surprise) contributes a little bit of additional variance, this could easily reflect additional uncontrolled features or some nonlinear relationship that was not captured by the linear model. This problem becomes more acute the smaller the effect size since even a small inaccuracy in the control model could explain the resulting finding. This problem is not specific to this study but is a problem nonetheless.
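To make the inference the reviewer criticizes concrete, here is a toy simulation (all data synthetic, all parameters assumed): a feature is credited with unique variance when adding it raises cross-validated prediction accuracy over the baseline. In this simulation the surprise feature genuinely contributes to `y`; with real data, the same improvement could in principle also arise from unmodeled correlated features or nonlinearities, which is the reviewer's point.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
baseline = rng.normal(size=(5_000, 4))                     # control features
surprise = 0.5 * baseline[:, 0] + rng.normal(size=5_000)   # correlated candidate feature
y = baseline @ rng.normal(size=4) + 0.1 * surprise + rng.normal(size=5_000)

def cv_score(X, y):
    """Cross-validated correlation between predicted and observed data."""
    pred = cross_val_predict(LinearRegression(), X, y, cv=5)
    return np.corrcoef(pred, y)[0, 1]

r_base = cv_score(baseline, y)
r_full = cv_score(np.column_stack([baseline, surprise]), y)
print(f"baseline r = {r_base:.3f}, baseline + surprise r = {r_full:.3f}")
```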
2. The authors make a distinction between "Gestalt-like principles" and "statistical learning" but they never define what is meant by this distinction. The Temperley model encodes a variety of important statistics of Western music, including statistics such as keys that are unlikely to reflect generic Gestalt principles. The Temperley model builds in some additional structure such as the notion of a key, which the n-gram and transformer models must learn from scratch. In general, the models being compared differ in so many ways that it is hard to conclude much about what is driving the observed differences in prediction accuracy, particularly given the small effect sizes. The context manipulation is more controlled, and the fact that neural prediction accuracy dissociates from the model performance is potentially interesting. However, I am not confident that the authors have a good neural index of surprise for the reasons described above, and this limits the conclusions that can be drawn from this manipulation.
Reviewer #2 (Public Review):
This manuscript focuses on the basis of musical expectations/predictions, both in terms of the basis of the rules by which these are generated, and the neural signatures of surprise elicited by violation of these predictions.
Expectation generation models directly compared were gestalt-like, n-gram, and a recently-developed Music Transformer model. Both shorter and longer temporal windows of sampling were also compared, with striking differences in performance between models.
Surprise (defined as per convention as negative log prior probability of the current note) responses were assessed in the form of evoked response time series, recorded separately with both MEG and EEG (the latter in a previously recorded freely available dataset). M/EEG data correlated best with surprise derived from musical models that emphasised long-term learned experiences over short-term statistical regularities for rule learning. Conversely, the best performance was obtained when models were applied to only the most recent few notes, rather than longer stimulus histories.
Uncertainty was also computed as an independent variable, defined as entropy, and equivalent to the expected surprise of the upcoming note (the sum, over possible note values, of the probability of each value times the surprise associated with it). Uncertainty did not improve predictive performance on M/EEG data, so was judged not to have distinct neural correlates in this study.
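In symbols, with $p(x \mid x_{1:t-1})$ denoting a model's predictive distribution over the next note (these are the standard information-theoretic definitions the reviewer describes):

\[
I(x_t) = -\log_2 p\!\left(x_t \mid x_{1:t-1}\right),
\qquad
H_t = -\sum_{x} p\!\left(x \mid x_{1:t-1}\right)\,\log_2 p\!\left(x \mid x_{1:t-1}\right),
\]

so the uncertainty $H_t$ is precisely the expected value of the surprisal $I$ under the model's own prediction, computed before the note is heard.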
The paradigm used was listening to naturalistic musical melodies.
A time-resolved multiple regression analysis was used, incorporating a number of binary and continuous variables to capture note onsets, contextual factors, and outlier events, in addition to the statistical regressors of interest derived from the compared models.
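As a sketch of how such event-based regressors are typically built (toy values throughout; the study's exact coding may differ), each note contributes an impulse at its onset, either binary for the onset regressor or scaled by a continuous value such as surprise:

```python
import numpy as np

# Assumed toy event list: (onset time in seconds, surprise value) per note.
events = [(0.50, 2.1), (1.10, 0.7), (1.65, 4.3)]
sfreq, duration = 100, 3.0
n_samples = int(duration * sfreq)

onset_reg = np.zeros(n_samples)      # binary regressor: 1 at each note onset
surprise_reg = np.zeros(n_samples)   # continuous regressor: surprise at onset
for t, s in events:
    i = int(round(t * sfreq))
    onset_reg[i] = 1.0
    surprise_reg[i] = s
```

Stacking such regressors (plus their time-lagged copies) yields the design matrix for the time-resolved regression.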
Regression data were subjected to non-parametric spatiotemporal cluster analysis, with weights from significant clusters projected into scalp space as planar gradiometers and into source space as two equivalent current dipoles per cluster.
General comments:
The research questions are sound, with a clear precedent of similar positive findings, but numerous unanswered questions and unexplored avenues.
I think there are at least two good reasons to study this kind of statistical response with music: firstly that it is relevant to the music itself; secondly, because the statistical rules of music are at least partially separable from lower-level processes such as neural adaptation.
Whilst some of the underlying musical theory and its implementation are beyond my expertise, the choice, implementation, fitting, and comparison of statistical models of music seem robust and meticulous.
The MEG and EEG data processing is also in line with accepted best practice and meticulously performed.
The manuscript is very well-written and free from grammatical or other minor errors.
The discussion strikes a brilliant balance of clearly laying out the interim conclusions and advances, whilst being open about caveats and limitations.
Overall, the manuscript presents a range of highly interesting findings which will appeal to a broad audience, based on rigorous experimental work, meticulous analysis, and fair and clear reporting.
Reviewer #3 (Public Review):
The authors compare several models of musical prediction in their accuracy and in their ability to explain neural data from MEG and EEG experiments. The results offer both methodological advancements, by introducing models that go beyond the current state of the art, and theoretical advancements, by inferring the effects of long- and short-term exposure on prediction. The results are clear and the interpretation is for the most part well reasoned.
At the same time, there are important aspects to consider. First, the authors may overstate the advancement of the Music Transformer with the present stimuli, as its increase in performance requires a considerably longer context than the other models. Secondly, the Baseline model, to which the other models are compared, does not contain any pitch information on which these models operate. As such, it's unclear if the advancements of these models come from being based on new information or from the operations they perform on this information, as claimed. Lastly, the source analysis yields some surprising results that don't fit with previous literature. For example, the authors show that responses to note onsets are encoded in Broca's area, whereas they would be expected more likely in the primary auditory cortex. While this issue is not discussed by the authors, it may put the rest of the source analysis into question.
While these issues are serious ones, the work still makes important advancements for the field and I commend the authors on a remarkably clear and straightforward text advancing the modeling of predictions in continuous sequences.