The normalization model predicts responses in the human visual cortex during object-based attention

Curation statements for this article:
  • Curated by eLife


    eLife assessment

The authors state that there is scant experimental evidence of divisive normalization of neural responses in the human brain. They used fMRI BOLD responses to high-level stimuli to explore normalization in V1, object-selective (LO and pFs) and category-selective (EBA and PPA) regions, as well as the effects of attention on cortical responses. Specifically, the authors first test the degree to which BOLD responses to body parts and houses exhibit responses predicted by a nonlinear normalization model, compared to two linear models (weighted sum and weighted average). They find that responses to one vs. two stimuli are best fit by the normalization model. They then suggest that object-based attention effects can be better accounted for by a normalization model of attention, compared to attention variants of the aforementioned models. The paper could potentially be an important contribution to the fields of perceptual and cognitive neuroscience, but the conclusions are not sufficiently supported by the data at this stage. Several theoretical and methodological concerns limit the conclusions of this study.


Abstract

Divisive normalization of neural responses by the activity of neighboring neurons has been proposed as a fundamental operation in the nervous system, based on its success in predicting neural responses recorded in primate electrophysiology studies. Nevertheless, experimental evidence for the existence of this operation in the human brain is still scant. Here, using functional MRI, we examined the role of normalization across the visual hierarchy in the human visual cortex. Using stimuli from the two categories of human bodies and houses, we presented objects in isolation or in clutter and asked participants to attend to or ignore the stimuli. Focusing on the primary visual area V1, the object-selective regions LO and pFs, the body-selective region EBA, and the scene-selective region PPA, we first modeled single-voxel responses using a weighted sum, a weighted average, and a normalization model. We demonstrated that although the weighted sum and weighted average models made acceptable predictions in some conditions, the response to multiple stimuli was generally better described by a model that takes normalization into account. We then determined the observed effects of attention on cortical responses and demonstrated that these effects were predicted by the normalization model, but not by the weighted sum or weighted average models. Our results thus provide evidence that the normalization model can predict responses to objects across shifts of visual attention, suggesting that normalization serves as a fundamental operation in the human brain.
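For orientation, the three candidate models can be written schematically as follows. This is an illustrative parameterization based on the abstract's description, not necessarily the exact form used in the paper: R_1 and R_2 denote a voxel's responses to each stimulus in isolation, w_1 and w_2 their weights, and sigma a semi-saturation constant; attention is commonly incorporated as a multiplicative gain on the attended stimulus's drive.

```latex
% Schematic model forms (illustrative notation, not the paper's exact
% parameterization). R_1, R_2: isolated responses; w_1, w_2: weights;
% \sigma: semi-saturation constant of the normalization model.
\begin{align}
  R_{\text{sum}}  &= w_1 R_1 + w_2 R_2\\
  R_{\text{avg}}  &= \frac{w_1 R_1 + w_2 R_2}{w_1 + w_2}\\
  R_{\text{norm}} &= \frac{w_1 R_1 + w_2 R_2}{\sigma + w_1 + w_2}
\end{align}
```

Only the normalization prediction is nonlinear in the inputs, which is the property at issue in the reviews below.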

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    Doostani et al. present work in which they use fMRI to explore the role of normalization in V1, LO, pFs, EBA, and PPA. The goal of the manuscript is to provide experimental evidence of divisive normalization of neural responses in the human brain. The manuscript is well written and clear in its intentions; however, it is not comprehensive and is limited in its interpretation. The manuscript is limited to two simple figures that support its conclusions. There is no report of behavior, so there is no way to know whether participants followed instructions. This is important as the study focuses on object-based attention and the analysis depends on the task manipulation. The manuscript does not show any clear progression towards its conclusions, and this makes it difficult to assess its scientific quality and the claims that it makes.

    Strengths:

    The intentions of the paper are clear and the design of the experiment itself is simple to follow. The paper presents some evidence for normalization in V1, LO, pFs, EBA, and PPA. The presented study has laid the foundation for a piece of work that could have importance for the field once it is fleshed out.

    Weakness:

    The paper claims that it provides compelling evidence for normalization in the human brain. Very broadly, the presented data support this conclusion; for the most part, the normalization model performs better than the weighted sum and weighted average models. However, the paper is limited in how it works its way up to this conclusion. There is no interpretation of how the data should look based on expectations, just how they do look and how/why the normalization model is most similar to the data. The paper shows a bias towards visualizing the 'best' data/areas that support the conclusions, whereas the data that are not as clear are minimized, yet the conclusions seem to lump all the areas together and any nuanced differences are not recognized. It is surprising that the manuscript does not present illustrative examples of BOLD time series from voxel responses across conditions, given that it states that it models responses of single voxels; these responses need to be provided for readers to get some sense of data quality. There are also issues regarding the statistics: the statistics in the paper are not explicitly stated, and from the information provided (multiple t-tests?), they seem to be incorrect. Last, but not least, there is no report of behavior, so it is not possible to assess the success of the attentional manipulation.

    We appreciate the reviewer’s feedback on providing more information so that the scientific quality of our work can be assessed. We have now added a new figure showing BOLD responses in the different conditions, along with how we expected the data to look and our interpretation. To provide further evidence of data quality and reliability, we have included BOLD responses for the different conditions, computed separately for odd and even runs, in the supplementary information.

    To avoid any bias in presentation, we have now visualized the results from all areas at the same size and in a more logical order. In addition, following the comment of one of the reviewers, we have modified all results to include only those voxels in each ROI that were active for the stimuli presented in the main task. With the current results, there is no difference in the performance of the normalization model across regions, which we report in the results section.

    Regarding the statistics, we have corrected the problem. We now use ANOVAs, correct all results for multiple comparisons, and have added a statistics subsection to the methods section that explicitly describes the statistical procedures.

    Finally, we have added reports of reaction time and accuracy to the results section and the supplementary information. As stated there, average performance was above 86% in all conditions, confirming that the participants correctly followed the instructions and that the attentional manipulation was successful.

    We hope that the reviewer will find the manuscript improved and that the new analyses, figures, and discussions address their concerns.

    Reviewer #2 (Public Review):

    My main concern regarding the interpretation of these results has to do with the sparseness of the data available for fitting the models. The authors pit two linear models against a nonlinear (normalization) model. The weighted average and weighted sum models are both linear models doomed to poorly match the fMRI data, particularly in contrast to the nonlinear model. So, while I appreciate the verification that responses to multiple stimuli don't simply add up or average, the model comparisons seem less interesting in this light. This is a particularly salient issue because the model-testing endeavor seems rather unconstrained. A 'true' test of the model would likely need a whole range of contrasts tested for one (or both) of the stimuli. Otherwise, as it stands, we simply have a parameter (sigma) that instantly gives more wiggle room than the other models. It would be fairer to pit this normalization model against other nonlinear models. Indeed, this has already been done in previous work by the groups of Kendrick Kay, Jon Winawer, and Serge Dumoulin. So far, my concern has only been with regard to the "unattended" data, but the same issue of course extends to the attended conditions. I think the authors need to either acknowledge the limits of this approach to testing the model or introduce some other frameworks.

    We thank the reviewer for their feedback. We have taken two approaches to address this concern. First, we have included simulations of neural population responses to attended and unattended stimuli. The results demonstrate that, with our cross-validation approach, the normalization model is the better fit only if the computation performed at the neural level for multiple-stimulus responses is divisive normalization. Otherwise, the weighted sum or the weighted average model provides the better fit to the population response when the neurons respectively sum or average their inputs. These results suggest that the normalization model provides a better fit to the data because the underlying computation performed by the neurons is divisive normalization, and not because of the model’s nonlinearity.
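    A minimal sketch of this simulation logic, with hypothetical generative rules, weights, and a simple grid-search fit (our own illustration, not the authors' simulation code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_units, noise_sd = 200, 0.1

    # Hypothetical isolated responses of each simulated unit to the two stimuli.
    r1 = rng.uniform(0.2, 1.0, n_units)
    r2 = rng.uniform(0.2, 1.0, n_units)

    def simulate_paired(rule, sigma_true=0.5):
        """Ground-truth paired response under a chosen neural computation."""
        if rule == "sum":
            paired = r1 + r2
        elif rule == "average":
            paired = (r1 + r2) / 2.0
        else:  # divisive normalization
            paired = (r1 + r2) / (sigma_true + 2.0)
        return paired + rng.normal(0.0, noise_sd, n_units)

    def predict(model, w1, w2, sigma=0.0):
        """Candidate-model prediction of the paired response."""
        drive = w1 * r1 + w2 * r2
        if model == "sum":
            return drive
        if model == "average":
            return drive / (w1 + w2)
        return drive / (sigma + w1 + w2)  # normalization

    def fit_and_test(model, train, test):
        """Grid-search fit on the training split, squared error on the held-out split."""
        weights = np.linspace(0.1, 2.0, 20)
        sigmas = np.linspace(0.05, 2.0, 20) if model == "normalization" else [0.0]
        best, best_err = None, np.inf
        for w1 in weights:
            for w2 in weights:
                for s in sigmas:
                    err = np.mean((predict(model, w1, w2, s) - train) ** 2)
                    if err < best_err:
                        best, best_err = (w1, w2, s), err
        return np.mean((predict(model, *best) - test) ** 2)

    # Two independent simulated "runs" under a summing ground truth: the weighted
    # sum model should win on held-out data despite having fewer parameters than
    # the normalization model, illustrating that cross-validation does not simply
    # favor the model with the most parameters.
    train, test = simulate_paired("sum"), simulate_paired("sum")
    for m in ("sum", "average", "normalization"):
        print(m, fit_and_test(m, train, test))
    ```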

    In a second approach, we tested a nonlinear model that generalizes the weighted sum and weighted average models with an extra saturation parameter (and thus has even more parameters than the normalization model). This model was also a worse fit than the normalization model.

    Regarding the reviewer’s comment on testing a range of contrasts: as we now emphasize in the discussion, we used single-stimulus, multiple-stimulus, attended, and unattended conditions to explore the change in response and how the normalization model accounts for the observed changes across conditions. While testing a range of contrasts would also be interesting, it would require a multi-session fMRI experiment with isolated and paired stimuli presented at several contrasts, in the presence and absence of attention. Moreover, the role of contrast in normalization has been investigated in previous studies; here we add to the existing literature by exploring responses to multiple objects and investigating the role of attention. Finally, because the design of our experiment involves superimposed stimuli, the range of contrasts we can use is limited: low-contrast superimposed stimuli cannot be easily distinguished, and high-contrast stimuli block each other.

    We hope that the reviewer will find the manuscript improved and that the new models, simulations, analyses, and discussions address their concerns.

    Reviewer #3 (Public Review):

    In this paper, the authors model brain responses for visual objects and the effect of attention on these brain responses. The authors compare three models that have been studied in the literature to account for the effect of attention on brain responses to multiple stimuli: a normalization model, a weighted average model, and a weighted sum model.

    The authors presented human volunteers with images of houses and bodies, presented in isolation or together, and measured fMRI brain activity. The authors fit the fMRI data to the predictions of these three models, and argue that the normalization model best accounts for the data.

    The strengths of this study include a relatively large number of participants (N=19), and data collected in a variety of different visual brain regions. The blocked design paradigm and the large number of fMRI runs enhance the quality of the dataset.

    Regarding the interpretation of the findings, there are a few points that should be considered: 1) The different models that are being studied have different numbers of free parameters. The normalization model has the highest number of free parameters, and it turns out to fit the data the best. Thus, the main finding could be due to the larger number of parameters in the model. The more parameters a model has, the higher "capacity" it has to potentially fit a dataset. 2) In the abstract, the authors claim that the normalization model best fits the data. However, on closer inspection, this does not appear to be the case systematically in all conditions, but rather more so in the attended conditions. In some of the other conditions, the weighted average model also appears to provide a reasonable fit, suggesting that the normalization model may be particularly relevant to modeling the effects of attention. 3) In the primary results, the data are collapsed across five different conditions (isolated/attended for preferred and null stimuli), making it difficult to determine how each model fares in each condition. It would be helpful to provide data separately for the different conditions.

    We thank the reviewer for their feedback.

    Regarding the reviewer’s concern about the number of free parameters, we have introduced a simulation approach demonstrating that, with our cross-validation procedure, a model with more parameters does not provide a better fit when the underlying neural computation does not match the computation performed by the model. Moreover, we have now included another nonlinear model with five parameters that performs worse than the normalization model. In addition, we have used the AIC measure alongside cross-validation for model comparison, and the AIC confirms the previous results.
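    For reference, a minimal sketch of an AIC computation for least-squares fits under a Gaussian-error assumption (a standard formula; the authors' exact implementation may differ):

    ```python
    import numpy as np

    def aic_gaussian(observed, predicted, n_params):
        """AIC for a least-squares fit with Gaussian residuals:
        AIC = n * ln(RSS / n) + 2 * k, dropping constant terms."""
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        n = observed.size
        rss = np.sum((observed - predicted) ** 2)
        return n * np.log(rss / n) + 2 * n_params

    # Lower AIC indicates a better trade-off between goodness of fit and the
    # number of free parameters, so a model is not rewarded merely for having
    # extra parameters.
    ```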

    Regarding the difference in the performance of the normalization model across conditions: after selecting the voxels in each ROI that were active during the main task (following a reviewer’s suggestion, to compensate for the difference in size between the localizer and task stimuli), we observed that the normalization model was the better fit for both attended and unattended conditions. However, since the weighted average model’s predictions were also close to the data in the unattended conditions, we now discuss the unattended condition separately and relate our results to previous reports of multiple-stimulus responses in the absence of attention.

    Finally, concerning model comparison for different conditions, we have calculated the models’ goodness of fit across conditions for each voxel. The reason for calculating the goodness of fit in this manner was to evaluate model fits based on their ability to predict response changes with the addition of a second stimulus and with shifts of attention. Since correlation is blind to a systematic prediction error shared by all voxels in a condition, calculating the goodness of fit across voxels would be misleading. We have now included a figure in the supplementary information illustrating the method we used for calculating the goodness of fit.
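    A minimal sketch of the distinction described here, assuming correlation is the goodness-of-fit measure and using a hypothetical voxels-by-conditions response matrix (our illustration, not the authors' code):

    ```python
    import numpy as np

    def fit_across_conditions(observed, predicted):
        """One goodness-of-fit value per voxel, computed across conditions:
        sensitive to whether the model captures how each voxel's response
        changes when a second stimulus is added or attention shifts."""
        return np.array([np.corrcoef(o, p)[0, 1]
                         for o, p in zip(observed, predicted)])

    def fit_across_voxels(observed, predicted):
        """One goodness-of-fit value per condition, computed across voxels:
        blind to a systematic over- or under-prediction shared by all voxels
        in a condition, because correlation ignores additive offsets."""
        return np.array([np.corrcoef(observed[:, c], predicted[:, c])[0, 1]
                         for c in range(observed.shape[1])])

    # observed and predicted are hypothetical (n_voxels x n_conditions) arrays
    # of measured and model-predicted responses.
    ```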

    We hope that the reviewer will find the manuscript improved and that the new analyses, simulations, figures, and discussions address their concerns.
