Parallel processing in speech perception with local and global representations of linguistic context

Curation statements for this article:
  • Curated by eLife


    Evaluation Summary:

    Brodbeck and colleagues make a strong contribution to the field of neurolinguistics by asking whether speech comprehension uses local (e.g., sublexical) or global (e.g., sentences) contextual probabilities. To tackle this, they recorded participants with magnetoencephalography while they listened to a story. The authors assessed which of three possible speech models best explained brain activity using state-of-the-art analyses and information-theoretic measures. The authors report strong and valuable evidence for both local and global contextual analyses supporting the coexistence of both hierarchical and parallel speech processing in the human brain.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #2 agreed to share their name with the authors.)


Abstract

Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence-level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    The key question addressed in this MEG study is whether speech is represented singly or in a multiplexed fashion across the linguistic hierarchy in the human brain. The authors used state-of-the-art analyses (multivariate Temporal Response Functions) and probabilistic information-theoretic measures (entropy, surprisal) to test distinct contextual speech processing models at three hierarchical levels. The authors report evidence for the coexistence of local and global predictive speech processing in the linguistic hierarchy.

    The work uses time-resolved neuroimaging with state-of-the-art analyses and cognitive (here, linguistic) modeling. The study is very well conducted and draws from very different fields of knowledge in convincing ways. I see one limitation of the current study in that the authors focused on phase-locked responses, and I hope future work could extend to induced activity.

    Overall, the flow in the MS could be streamlined. Some smoothing of the introduction would be helpful to bring out the main messages you wish to convey.

    For instance, in the abstract:

    – Can you explain the two views in a simpler way in the abstract, for a non-linguist audience? Do you mean to say that classic psycholinguistic models tend to follow a strictly hierarchical integration (analysis only), whereas an alternative model is hierarchically inferential (analysis by synthesis)?

    – Indicate early on in the abstract or intro where the audience is being led, with a concise message on how you address the main question. For instance:

    To contrast our working hypotheses A and B, we used a novel information-theoretic modeling approach and associated measures (entropy, surprisal), which make clear predictions on the latency of brain activity in response to speech at three hierarchical contextual levels (sublexical, word, and sentence).

    We have revised the Abstract and Introduction to reduce the amount of terminology and add additional explanations. Wherever possible, we now use general terms (“bottom up”, “predictions”, “context”, …) instead of terms associated with specific theories. We hope we found a balance between improving accessibility and retaining the qualities seen by Reviewer 2, who thought the Introduction was clearly written and well connected to the psycholinguistics literature.

    All the models we compare are compatible with an analysis by synthesis approach, as long as the generative models are understood to entail making probabilistic predictions about future input. The generative models in analysis by synthesis, then, are one way in which “to organize internal representations in such a way as to minimize the processing cost of future language input” (Introduction, first paragraph). We have clarified this in the first paragraph of the Introduction.

    • Why did the authors consider that the evoked response is the proper signal to assess as opposed to oscillatory (or non phase-locked) activity?

    The primary reason for our choice of dependent measure is the prior research we based our design on, showing that the linguistic entropy and surprisal effects are measurable in phase-locked responses (Brodbeck et al., 2018; Donhauser and Baillet, 2020). We have made this more explicit in the part of the Introduction where we introduce our approach (“To achieve this, we analyzed …”).

    As to oscillatory dependent measures, we consider them an interesting but parallel research question. We are not aware of specific corresponding effects in non-phase-locked activity. Accordingly, analyzing oscillatory responses without a clear prior hypothesis would require additional decisions, such as which bands to analyze, which would entail issues of multiple comparisons. An additional caveat is that the temporal resolution of oscillatory activity is often lower than that of phase-locked activity, which might make it harder to distinguish responses based on their latency, as we did here to test whether the latencies associated with different context models differ.

    • Parallel processing with different levels of context (hence temporal granularities) sounds compatible with the temporal multiplexing of speech representations proposed by Giraud & Poeppel (2012), or do the authors consider it a separate issue?

    We consider our investigation orthogonal to the model discussed by G&P (2012). G&P’s model is about the organization of acoustic information at different time-scales, and does not discuss the influence of linguistic constructs at the word level and above. On the other hand, the information-theoretic models that form the basis of our analysis track the linguistic information that can be extracted from the acoustic signal. The temporal scales invoked by G&P’s model are also different from the ones used here, defined based on acoustic vs. linguistic units. Thus, the kind of neural entrainment as a mechanism for speech processing hypothesized by G&P is fully compatible with our account, but not at all required by it.

    Methods:

    • Figure 2: please spell out TRFs and clarify the measured response

    We have done both in the Figure legend.

    • The sample size (N=12) is very low by today's standards, but the statistical granularity is that of the full MEG recording. Can a power estimate be provided, or a clear justification of the reliability of the statistical measures be described?

    We appreciate and share the reviewers’ concern with statistical power and have made several modifications to better explain and rationalize our choices.

    First, to contextualize our study: The sample size is similar to the most comparable published study, which had 11 participants (Donhauser and Baillet, 2020). Our own previous study (Brodbeck et al., 2018) had more participants (28) but only a fraction of the data per subject (8 minutes of speech in quiet, vs. 47 minutes in the present dataset). We added this consideration to the Methods/Participants section.

    We also added a table with effect sizes for all the main predictors to make that information more accessible (Table 1). This suggests that the most relevant effects have Cohen’s d > 1. With our sample size of 12, we had 94% power to detect an effect with d = 1, and 99% power to detect an effect with d = 1.2. This post-hoc analysis suggests that our sample was adequately powered for the intended purpose.
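
    As a rough, non-authoritative check, these numbers can be reproduced with a standard power calculation. The sketch below assumes a one-tailed one-sample t-test across subjects at alpha = 0.05; the exact test parameters are an assumption of this illustration rather than a description of the published procedure.

    ```python
    # Minimal sketch of the post-hoc power estimates, assuming a one-tailed
    # one-sample t-test across n = 12 subjects at alpha = 0.05. The exact test
    # behind the reported figures may differ slightly.
    from statsmodels.stats.power import TTestPower

    analysis = TTestPower()
    for d in (1.0, 1.2):
        power = analysis.power(effect_size=d, nobs=12, alpha=0.05,
                               alternative="larger")
        print(f"Cohen's d = {d}: power = {power:.2f}")
    ```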

    Finally, all crucial model comparisons are accompanied by swarm-plots that show each subject as a separate dot, thus showing that these comparisons are highly reproducible across participants (note that there are rarely participants with a model difference below 0, indicating that all the effects are seen in most subjects).

    • The inclusion of a left-handed participant in speech studies is unusual; please comment on any difference (or lack thereof) for this participant, notably in the lateralization tests.

    We agree that this warrants further comment, in particular given our lateralization findings. We have made several changes to address this concern. At the same time, we hope that the reviewers agree with us that, with proper care, inclusion of a left-handed participant is desirable (Willems et al., 2014), and indeed is becoming more mainstream, at least for studies of naturalistic language processing (e.g. Shain et al., 2020). First, we now draw attention to the presence of a left-hander where we introduce our sample (first paragraph of the Results section). Second, we repeated all tests of lateralization while excluding the left-hander. Because this did not change any of the conclusions, we decided to keep reporting results for the whole sample. However, third, we now mark the left-handed participant in all plots that include single-subject estimates and corresponding source data files. Overall, the left-hander indeed shows stronger right-lateralization than the average participant, but is by no means an outlier.

    • The authors state that eyes were kept open or closed. This is again unusual, as we know that eye closure affects not only the degree of concentration/fatigue but also directly impacts alpha activity (which in turn affects the evoked responses (1-40 Hz, then 20 Hz) being estimated here). Please explain.

    Previous comparable studies variably asked subjects to keep their eyes closed (e.g. Brodbeck et al., 2018) or open (e.g. Donhauser and Baillet, 2020). Both modes have advantages and disadvantages, none of which are prohibitive for our target analysis (ocular artifacts were removed with ICA and oscillatory alpha activity should, on average, be orthogonal to time-locked responses to the variables of interest). Importantly however, both modes have subjective disadvantages when enforced: deliberately keeping eyes open can lead to eye strain and excessive blinking, whereas closing eyes can exacerbate sleepiness. For this reason we wanted to allow subjects to self-regulate to optimize the performance on the aspects of the task that mattered – processing meaning in the audiobook. We extended the corresponding Methods section to explain this.

    • It would be helpful to clarify the final temporal granularity of the analysis. The TRF time courses are said to be resampled to 1 kHz (p22), but the MEG time courses are said to be resampled to 100 Hz (p18).

    Thanks for noting this. We clarified this in the TRF time-course section: the deconvolution analysis was performed at 100 Hz, and TRFs were then resampled to 1 kHz for visualization and fine-grained peak analysis.
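
    Purely as an illustration of how the two rates relate, a schematic of the resampling step (with placeholder TRF values; this is not the code used for the published analysis):

    ```python
    # Schematic of the two temporal granularities: a TRF estimated at 100 Hz is
    # upsampled to 1 kHz so that peak latencies can be read out on a 1 ms grid.
    # The TRF values here are random placeholders.
    import numpy as np
    from scipy.signal import resample_poly

    fs_est, fs_plot = 100, 1000              # deconvolution rate, visualization rate
    trf_100hz = np.random.randn(100)         # placeholder TRF covering ~0-1000 ms
    trf_1khz = resample_poly(trf_100hz, up=fs_plot // fs_est, down=1)
    peak_latency_ms = np.argmax(np.abs(trf_1khz)) / fs_plot * 1000
    print(f"peak latency = {peak_latency_ms:.0f} ms")
    ```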

    • The % of variance explained by acoustic attributes is 15- to 20-fold larger than that explained by the linguistic models of interest. Can an SNR measure be evaluated for such observations?

    We appreciate this concern, which is indeed reasonable. In order to better clarify this issue, we have added a new paragraph right after Table 1. In brief, since the statistical analysis looks for generality across subjects, the raw % explained values do not directly speak to the SNR or effect size. Rather, the relevant SNR concerns how much variability there is in this value across subjects. The individual subject values in Figure 3-B, and the effect sizes now reported in Table 1, show that even though the % of variance that is uniquely attributable to information-theoretic quantities is small, it is consistently larger than 0 across subjects.
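
    The logic can be illustrated with a toy calculation; the per-subject values below are invented for illustration and are not the study's data.

    ```python
    # Toy illustration: a small unique gain in explained variation can still
    # correspond to a large standardized effect if it is consistent across
    # subjects. The per-subject values are hypothetical.
    import numpy as np
    from scipy import stats

    delta = np.array([0.9, 1.4, 0.7, 1.1, 1.6, 0.8,
                      1.2, 1.0, 1.3, 0.6, 1.5, 0.9])  # hypothetical unique gain (%) per subject

    cohens_d = delta.mean() / delta.std(ddof=1)
    t_stat, p_val = stats.ttest_1samp(delta, popmean=0, alternative="greater")
    print(f"d = {cohens_d:.2f}, t(11) = {t_stat:.2f}, one-tailed p = {p_val:.2g}")
    ```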

    Results and Figures:

    • The current figures do not give enough credit to the depth of analysis being presented. I understand that this is typical for such mTRF approaches, but given the level of abstraction being evaluated in the linguistic inputs, it may be helpful to show an example of what to expect for low vs. high surprisal, for instance, from the modeling perspective and over time. For instance, could Figure 1 already illustrate distinct predictions of the local vs. global models?

    Thank you for pointing out this gap. We have added two figures to make the results more approachable:

    First, in Figure 3 we now show an example stimulus excerpt with all predictors we used. This makes the complete set of predictors quickly apparent without readers having to collect the information from the different places in the manuscript. It also gives a better sense of the detail that is modeled in the different stimulus representations. Second, we added Figure 6 to show example predictions from the different context models, and explain better how the mTRF approach can decompose brain responses into components related to different stimulus properties.

    • Why are visual cortices highlighted in figures?

    Those were darkened to indicate that they are excluded from the analysis. We have added a corresponding explanation to the legend of Figure 3.

    • Figure 2:

    Fig 2A and B: can the authors quantitatively illustrate "5-gram generally leads to a reduction of word surprisal but its magnitude varies substantially between words" by simply showing the mean surprisal and its variance?

    Added to the Figure legend.

    Fig 2C: please explain the term "partial response"; please indicate for non M/EEGers what the arrow symbolizes.

    Added to the Figure legend.

    • Figure 3:

    p8: the authors state that they controlled for the "acoustic features" but do not clearly describe how in the Methods, and this control comes as a (positive) surprise but is still a bit unexpected at first read. Perhaps include the two acoustic features in Fig 2C and provide a couple of short sentences on how these could impair or confound mTRF performance.

    We thank you for pointing out this lack of explanation. We have added a description of all the control predictors to the end of the Introduction, right after explaining the predictors of main interest. We have also added Figure 3 to give an example and make the nature of all the controls explicit.

    Has the same analysis been conducted on a control region a priori not implicated in linguistic processing? This would help support the current results.

    The analysis has been performed on the whole brain (excluding the insula and the occipital lobe). Figure 4 (previously Figure 3) shows that generally only regions in the temporal lobe exhibit significant contributions from the linguistic models (allowing for some dispersion associated with MEG source localization). Although this is not shown in the figure, regions further away from the significant region generally exhibit a decrease in prediction accuracy from adding linguistic predictors, as is commonly seen with cross-validation when models overfit to irrelevant predictors.

    Fig 3B-C-E: please clearly indicate what a single dot or "individual value" represents. Is this an average over the full ROI? Was the orientation fixed? Can some measure of variability be provided?

    Explanation of individual dots added to Figure 4-B legend (formerly 3-B). Fixed orientation added to the methods summary in the Figure 2-C legend. To provide more detailed statistics including a measure of variability we added Table 1.

    Fig 3E: make it bigger / more readable (too many colors: the significance bars could be black).

    We have increased the size and made the significance bars black.

    • Figure 4: having to go to the next figure (Fig 5) to understand the time windows is inconvenient and difficult to follow. Please find a workaround or combine the two figures. From which ROI are the time series extracted?

    We have combined the two figures to facilitate comparison, and have added a brief explanation of the ROI to the figure legend.

    Reviewer #3 (Public Review):

    This manuscript presents a neurophysiological investigation of the hierarchical nature of prediction in natural speech comprehension. The authors record MEG data while participants listen to speech from an audiobook, and they model that MEG using a number of different speech representations in order to explore how context affects the encoding of that speech. In particular, they are interested in testing how the response to phonemes is affected by context at three different levels: sublexical – how the probability of an upcoming phoneme is constrained by previous phonemes; word – how the probability of an upcoming phoneme is affected by its being part of an individual word; sentence – how the probability of an upcoming phoneme is affected by the longer-range context of the speech content. Moreover, the authors are interested in exploring how effects at these different levels might contribute - independently - to explaining the MEG data. In doing so, they argue for parallel contributions to predictive processing from both long-range context and more local context. The authors discuss how this has important implications for how we understand the computational principles underlying natural speech perception, and how it can potentially explain a number of interesting phenomena from the literature.

    Overall, I thought this was a very well written and very interesting manuscript. I thought the authors did a really superb job, in general, of describing their questions against the previous literature, and of discussing their results in the context of that literature. I also thought, in general, that the methods and results were well explained. I have a few comments and queries for the authors too, however, most of which are relatively minor.

    Main comments:

    1. One concern I had was about the fact that context effects are estimated using 5-gram models. I appreciate the computational cost involved in modeling more context. But, at the same time, I worry a little that examining the previous 4 phonemes or (especially) words is simply not enough to capture longer-term dependencies that surely exist. The reason I am concerned about this is that the sentence-level context you are incorporating here is surely suboptimal. As such, could it be the case that the more local models are performing as well as they are simply because the sentence-level context has not been modeled as well as it should be? I appreciate that the temporal and spatial patterns appear to differ for the sentence level relative to the other two, so that is good support for the idea that they are genuinely capturing different things. However, I think some discussion of the potential shortcomings of only including 4 tokens of context is worth adding. Particularly when you make strong claims like that on line 252.

    We strongly agree with the reviewer that the 5-gram model is not the ultimate model of human context representations. We have added a section to acknowledge this (Limitations of the sentence context model).

    While we see much potential for future work to investigate context processing by using more advanced language models, a preliminary investigation suggests that it might not be trivial. We compared the ability of a pre-trained LSTM (Gulordava et al., 2018) to predict the brain response to words in our dataset with that of the 5-gram model. The LSTM performed substantially worse than the 5-gram model. An important difference between the two models is that our 5-gram model was trained on the Corpus of Contemporary American English (COCA), whereas the LSTM was trained on Wikipedia. COCA provides a large and highly realistic sample of English, whereas the language in Wikipedia might be a more idiosyncratic subsample. Thus, the LSTM might be worse just because it has been trained on a less representative sample of English. As an initial step we would thus need to train the LSTM on the superior COCA corpus, but this step alone would already carry a substantial computational cost, given the size of COCA at more than a billion words (we estimated 3 weeks on 32 GPUs in a computing cluster). Furthermore, while we acknowledge the limitations of the 5-gram model, we consider it very unlikely that its limitations are the reason that the more local models are performing well. In general, as more context is considered, the model’s predictions should become more different from those of the local models, i.e., a more sophisticated model should be less correlated with the local models, and should thus allow the local models to perform even better.

    2. I found myself confused about what exactly was being modeled on my first reading of pages 4 through 7. I realized then that all of the models are based on estimating a probability distribution based on phonemes (stated on line 167). I think why I found it so confusing was that the previous section talked about using word forms and phonemes as units of representation (lines 118-119; Fig 2A), and I failed to grasp that, in fact, you were not going to be modeling surprisal or entropy at the word level, but always at the phoneme level (just with different context). Anyway, I just thought I would flag that as other readers might also find themselves thinking in one direction as they read pages 4 and 5, only to find themselves confused further down.

    Thank you for pointing out this ambiguity; we now make it explicit that “all our predictors reflect information-theoretic quantities at the rate of phonemes” early on in the Expressing the use of context through information theory section.
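
    Schematically, all three models assign a probability to the same event, the k-th phoneme of the current word, and differ only in the conditioning context (the notation below is illustrative and may not match the manuscript exactly):

    ```latex
    % Illustrative conditioning contexts for the three models; notation is
    % schematic, not taken verbatim from the manuscript.
    \begin{align*}
      \text{sublexical:} \quad & p\left(ph_k \mid ph_{k-4}, \dots, ph_{k-1}\right)
          && \text{the preceding phonemes}\\
      \text{word:}       \quad & p\left(ph_k \mid ph_1, \dots, ph_{k-1}\right)
          && \text{the phonemes of the current word so far}\\
      \text{sentence:}   \quad & p\left(ph_k \mid w_{t-4}, \dots, w_{t-1},\; ph_1, \dots, ph_{k-1}\right)
          && \text{the preceding words plus the current word's phonemes}
    \end{align*}
    ```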

    3. I also thought some of the formal explanations of surprisal and entropy on lines 610-617 would be valuable if added to the first paragraph on page 6, which, at the moment, is really quite abstract and not as digestible as it could be, particularly for entropy.

    We appreciate that this needs to be much clearer for readers with different backgrounds. As suggested, we have added the formal definition to the Introduction, and we now also point readers explicitly to the Methods subsection that explains these definitions in more detail.
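
    For reference, the standard information-theoretic definitions, which the formal explanations in the Methods follow in spirit, can be written as follows (with p(. | context) the model's predictive distribution over the next phoneme):

    ```latex
    % Standard definitions of phoneme surprisal and entropy for a predictive
    % distribution p(. | context) over the next phoneme.
    \begin{align*}
      \text{surprisal:} \quad I_k &= -\log_2 p\left(ph_k \mid \text{context}\right)\\
      \text{entropy:}   \quad H_k &= -\sum_{ph} p\left(ph \mid \text{context}\right)
                                       \log_2 p\left(ph \mid \text{context}\right)
    \end{align*}
    ```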

    4. I like the analysis examining the possibility of tradeoffs between context models. I wonder whether such tradeoffs might exist as conversational environments vary - if the complexity of the speech varies and/or listening conditions vary - might there be more reliance on local vs. global context then. If that seems plausible, then it might be worth adding a caveat that you found no evidence for any tradeoff, but that your experiment was pretty homogeneous in terms of speech content.

    Thank you for this suggestion. We added this idea to the Discussion in the Implications for speech processing section.

  2. Reviewer #2 (Public Review):

    This manuscript describes an MEG study in which N=12 English-speaking participants listened to about 45 minutes of an audiobook story. The key question is what sorts of information guide predictions during this naturalistic comprehension: local information (e.g., phoneme-to-phoneme transitions) or global information (e.g., sentence context constraining phoneme expectations, etc.). These theories were tested by constructing a set of language models that varied the context used to compute phoneme and word probabilities; these probabilities were quantified in terms of surprisal and entropy, and those values were fit against source-localized MEG data using standard techniques (mTRFs). Results showed independent contributions of both more local and more global contexts in superior temporal sources.

    I really like this manuscript and I think it will make a fine contribution to the literature. A few things to highlight: It is very clearly written. The introduction does a really nice job of integrating current state-of-the-art thinking with classic key psycholinguistic debates; the theoretical stakes are very clear. I also appreciated the relatively cautious aspects of interpretation such as the analysis looking at trade-offs between global and local contexts.
