Spatial frequency adaptation modulates population receptive field sizes


Abstract

The spatial tuning of neuronal populations in the early visual cortical regions is related to the spatial frequency (SF) selectivity of neurons. However, there has been no direct investigation into how this relationship is reflected in population receptive field (pRF) sizes despite the common application of pRF mapping in visual neuroscience. We hypothesised that adaptation to high/low SF would decrease the sensitivity of neurons with respectively small/large receptive field sizes, resulting in a change in pRF sizes as measured by functional magnetic resonance imaging (fMRI). To test this hypothesis, we first quantified the SF aftereffect using a psychophysical paradigm where observers made SF judgments following adaptation to high/low SF noise patterns. We then incorporated the same adaptation technique into a standard pRF mapping procedure, to investigate the spatial tuning of the early visual cortex following SF adaptation. Results showed that adaptation to a low/high SF resulted in smaller/larger pRFs respectively, as hypothesised. Our results provide the most direct evidence to date that the spatial tuning of the visual cortex, as measured by pRF mapping, is related to the SF selectivity of visual neural populations. This has implications for various domains of visual processing, including size perception and visual acuity.

Article activity feed

  1. eLife Assessment

    This study presents an important finding regarding a significant, understudied question: How does adaptation affect spatial frequency processing in the human visual cortex? Using both psychophysics and neuroimaging, the authors conclude that adaptation induces changes in perceived spatial frequency and population receptive field (pRF) size, depending on the adaptation state. Specifically, adapting to a low spatial frequency increases perceived spatial frequency and results in smaller pRFs, whereas adapting to a high spatial frequency decreases perceived spatial frequency and leads to broader pRFs. These results offer an explanation for previous, seemingly conflicting findings regarding the effects of adaptation on size illusions, and the evidence is solid; however, including a clear, direct comparison between pRF sizes in the high-adapted and low-adapted conditions would further strengthen the argument.

  2. Reviewer #1 (Public review):

    Summary:

    This paper tests the hypothesis that neuronal adaptation to spatial frequency affects the estimation of spatial population receptive field sizes as commonly measured using the pRF paradigm in fMRI. To this end, the authors modify a standard pRF setup by presenting either low or high SF (near full field) adaptation stimuli prior to the start of each run and interleaved between each pRF bar stimulus. The hypothesis states that adaptation to a specific spatial frequency (SF) should affect only a specific subset of neurons in a population (measured with an fMRI voxel), leaving the other neurons in the population intact, resulting in a shift in the tuning of the voxel in the opposite direction of the adapted stimulus (so high SF adaptation leads to larger pRF size estimates, and vice versa). The paper shows that this 'repelling' effect is robustly detectable psychophysically and is evident in pRF size estimates after adaptation in line with the hypothesized direction, thereby demonstrating a link between SF tuning and pRF size measurements in the human visual cortex.

    Strengths:

    The paper introduces a new experimental design to study the effect of adaptation on spatial tuning in the cortex, nicely combining the neuroimaging analysis with a separate psychophysical assessment.

    The paper includes careful analyses and transparent reporting of single-subject effects, and several important control analyses that exclude alternative explanations based on perceived contrast or signal-to-noise differences in fMRI.

    The paper contains very clear explanations and visualizations, and a carefully worded Discussion that helpfully contextualizes the results, elucidating prior findings on the effect of spatial frequency adaptation on size illusion perception.

    Weaknesses:

    The fMRI experiments consist of a relatively small sample size (n=8), of which not all consistently show the predicted pattern in all ROIs. For example, one subject shows a strong effect in the pRF size estimates in the opposite direction in V1. It's not clear if this subject is also in the psychophysical experiment and if there is perhaps a behavioral correlate of this deviant pattern. The addition of a behavioral task in the scanner testing the effect of adaptation could perhaps have helped clarify this (although arguably it's difficult to do psychophysics in the scanner). Although the effects are clearly robust at the group level here, a larger sample size could clarify how common such deviant patterns are, and potentially allow for the assessment of individual differences in adaptation effects on spatial tuning as measured with fMRI, and their perceptual implications.

    The psychophysical experiment in which the perceptual effects are shown included a neutral condition, which allowed for establishing a baseline for each subject and the discovery of an asymmetry in the effects, with stronger perceptual effects after high SF adaptation compared to low SF. This neutral condition was lacking in the fMRI experiment, and thus - as acknowledged - this asymmetry could not be tested at the neural level, also precluding the possibility of comparing the obtained pRF estimates to the typical ranges found using standard pRF mapping procedures (without adaptation), or of comparing the SNR of the adaptation pRF paradigm with that of a regular paradigm, etc.

    The results indicate quite some variability in the magnitude of the shift in pRF size across eccentricities and ROIs (Figure 5B). It would be interesting to know more about the sources of this variability, and if there are other effects of adaptation on the estimated retinotopic maps other than on pRF size (there is one short supplementary section on the effects on eccentricity tuning, but not polar angle).

  3. Reviewer #2 (Public review):

    The manuscript "Spatial frequency adaptation modulates population receptive field sizes" is a heroic attempt to untangle a number of visual phenomena related to spatial frequency using a combination of psychophysical experiments and functional MRI. While the paper clearly offers an interesting and clever set of measurements supporting the authors' hypothesis, my enthusiasm for its findings is somewhat dampened by the small number of subjects, high noise, and lack of transparency in the report. Despite several of the methods being described somewhat heuristically and/or being difficult to understand, the authors do not appear to have released the data or source code, nor to have committed to doing so, and the particular figures in the paper and supplements give a view of the data that I am not confident is a complete one. If either data or source code for the analyses and figures were provided, this concern could be largely mitigated, but the explanation of the methods is not sufficient for me to be anywhere near confident that an expert could reproduce these results, even starting from the authors' data files.

    Major Concerns:

    I feel that the authors did a nice job with the writing overall and that their explanation of the topic of spatial frequency (SF) preferences and pRFs in the Introduction was quite nice. One relatively small critique is that there is not enough explanation as to how SF adaptation would lead to changes in pRF size theoretically. In a population RF, my assumption is that neurons with both small and large RFs are approximately uniformly distributed around the center of the population. (This distribution is obviously not uniform globally, but at least locally, within a population like a voxel, we wouldn't expect the small RFs to be on average nearer the voxel's center than the voxel's edges.) Why then would adaptation to a low SF (which the authors hypothesize results in higher relative responses from the neurons with smaller RFs) lead to a smaller pRF? The pRF size will not be a function of the mean of the neural RF sizes in the population (at least not the neural RF sizes alone). A signal driven by smaller RFs is not the same as a signal driven by RFs closer to the center of the population, which would more clearly result in a reduction of pRF size. The illustration in Figure 1A implies that this is because there won't be as many small RFs close to the edge of the population, but there is clearly space in the illustration for more small RFs further from the population center that the authors did not draw. On the other hand, if the point of the illustration is that some neurons will have large RFs that fall outside of the population center, then this ignores the fact that such RFs will have low responses when the stimulus partially overlaps them. This is not at all to say that I think the authors are wrong (I don't) - just that I think the text of the manuscript presents a bit of visual intuition in place of a clear model for one of the central motivations of the paper.

    The fMRI methods are clear enough to follow, but I find it frustrating that throughout the paper, the authors report only normalized R2 values. The fMRI stimulus is a very interesting one, and it is thus interesting to know how well pRF models capture it. This is entirely invisible due to the normalization. This normalization choice likely leads to additional confusion, such as why it appears that the R2 in V1 is nearly 0 while the confidence in areas like V3A is nearly 1 (Figure S2). I deduced from the identical underlying curvature maps in Figures 4 and S2 that the subject in Figure 4 is in fact Participant 002 of Figure S2, and, assuming this deduction is correct, I'm wondering why the only high R2 in that participant's V1 (per Figure S2) seems to correspond to what looks like noise and/or signal dropout to me in Figure 4. If anything, the most surprising finding of this whole fMRI experiment is that SF adaptation seems to result in a very poor fit of the pRF model in V1 but a good fit elsewhere; this observation is the complete opposite of my expectations for a typical pRF stimulus (which, in fairness, this manuscript's stimulus is not). Given how surprising this is, it should be explained/discussed. It would be very helpful if the authors showed a map of average R2 on the fsaverage surface somewhere along with a map of average normalized R2 (or maps of each individual subject).

    On page 11, the authors assert that "Figure 4c clearly shows a difference between the two conditions, which is evident in all regions." To be honest, I did not find this to be clear or evident in any of the highlighted regions in that figure, though close inspection leads me to believe it could be true. This is a very central point, though, and an unclear figure of one subject is not enough to support it. The plots in Figure 5 are better, but there are many details missing. What thresholding was used? Could the results in V1 be due to the apparently small number of data points that survive thresholding (per Figure S2)? I would very much like to see a kernel density plot of the high-adapted (x-axis) versus low-adapted (y-axis) pRF sizes for each visual area. This seems like the most natural way to evaluate the central hypothesis, but it's notably missing.

    Regarding Figure 4, I was curious why the authors didn't provide a plot of the difference between the PRF size maps for the high-adapted and low-adapted conditions in order to highlight these apparent differences for readers. So I cut the image in half (top from bottom), aligned the top and bottom halves of the figure, and examined their subtraction. (This was easy to do because the boundary lines on the figure disappear in the difference figure when they are aligned correctly.) While this is hardly a scientific analysis (the difference in pixel colors is not the difference in the data), what I noticed was surprising: There are differences in the top and bottom PRF size maps, but they appear to correlate spatially with two things: (1) blobs in the PRF size maps that appear to be noise and (2) shifts in the eccentricity maps between conditions. In fact, I suspect that the difference in PRF size across voxels correlates very strongly with the difference in eccentricity across voxels. Could the results of this paper in fact be due not to shifts in PRF size but shifts in eccentricity? Without a better analysis of the changes in eccentricity and a more thorough discussion of how the data were thresholded and compared, this is hard to say.

    While I don't consider myself an expert on psychophysics methods, I found the sections on both psychophysical experiments easy to follow and the figures easy to understand. The one major exception to this is the last paragraph of section 4.1.2, which I am having trouble following. I do not think I could reproduce this particular analysis based on the text, and I'm having a hard time imagining what kind of data would result in a particular PSE. This needs to be clearer, ideally by providing the data and analysis code.

    Overall, I think the paper has good bones and provides interesting and possibly important data for the field to consider. However, I'm not convinced that this study will replicate in larger datasets - in part because it is a small study that appears to contain substantially noisy data but also because the methods are not clear enough. If the authors can rewrite this paper to include clearer depictions of the data, such as low- and high-adapted pRF size maps for each subject, per visual-area 2D kernel density estimates of low- versus high-adapted pRF sizes for each voxel/vertex, clear R2 and normalized-R2 maps, this could be much more convincing.

  4. Reviewer #3 (Public review):

    This is a well-designed study examining an important, surprisingly understudied question: how does adaptation affect spatial frequency processing in the human visual cortex? Using a combination of psychophysics and neuroimaging, the authors test the hypothesis that spatial frequency tuning is shifted to higher or lower frequencies, depending on the preadapted state (low or high s.f. adaptation). They do so by first validating the phenomenon psychophysically, showing that adapting to 0.5 cpd stimuli causes an increase in perceived s.f., and 3.5 cpd causes a relative decrease in perceived s.f. Using the same stimuli, they then port these stimuli to a neuroimaging study, in which population receptive fields are measured under high and low spatial frequency adaptation states. They find that adaptation changes pRF size, depending on adaptation state: adapting to high s.f. led to broader overall pRF sizes across the early visual cortex, whereas adapting to low s.f. led to smaller overall pRF sizes. Finally, the authors carry out a control experiment to psychophysically rule out the possibility that the perceived contrast change with adaptation may have given rise to these imaging results (this doesn't appear to be the case). All in all, I found this to be a good manuscript: the writing is taut, and the study is well designed. There are a few points of clarification that I think would help, though, including a little more detail about the pRF analyses carried out in this study. Moreover, one weakness is that the sample size is relatively small, given the variability in the effects.

    (1) The pRF mapping stimuli and paradigm are slightly unconventional. This is, of course, fairly necessary to assess the question at hand. But, unless I missed it, there is a potentially critical piece of the analyses that I couldn't find in the results or methods: is the top-up adapter incorporated into the inputs for the pRF analyses, or was pRF size simply estimated in response to the pRF mapping bar? Ignoring the large, full field-ish top-up seems like it might be dismissing an important nonlinearity in RF response to that aspect of the display (including that it had different s.f. content from the mapping stimulus) - especially because it occurred 50% of the time during the pRF mapping procedure. While the bar/top-up were sub-TR events, you could still model the pRF probe + top-up response, then downsample to the TR level afterwards. In any case, to fully understand this, some more detail is needed here regarding the pRF fitting procedure.

    (2) I appreciate the eccentricity-dependent breakdown in Figure 5b. However, it would be informative to have included the actual plots of the pRF size as a function of eccen, for the two conditions individually, in addition to the difference effects depicted in 5b.

    (3) I know the N is small for this, but did the authors take a look at whether there was any relationship between the magnitude of the psychophysical effect and the change in pRF size, per individual? This is probably underpowered but could be worth a peek.

  5. Author response:

    We thank the reviewers for their valuable comments. Our revision will address their recommendations and clarify any misconceptions. The main points we plan to amend are as follows:

    Direct comparison of pRF sizes

    We may have misunderstood this comment in the eLife assessment. We believe our original analyses and the figures already provided a “direct comparison between pRF sizes in the high-adapted and low-adapted conditions”. Specifically, we included a figure showing the histograms of pRF sizes in both conditions, and also reported statistical tests to compare conditions both within each participant and across the group. However, we now realize these comparisons might not be as clear to readers as we intended, which would explain Reviewer #2’s interpretations. To clarify, in our revised version we will instead show 2D plots comparing pRF sizes between conditions as suggested by Reviewer #2, and also show the pRF size plotted against eccentricity (rather than only the difference) as suggested by Reviewer #3.

    Data sharing

    The behavioral data, fMRI data (where ethically permissible), stimulus-generation code, statistical analyses, and fMRI stimulus video are already publicly available at the link: https://osf.io/9kfgx/. However, we unfortunately failed to include the link in the preprint. We apologize for this oversight. It will be included in the revision. The repository now also contains a script for simulated adaptation effects on pRF size used in our response to Reviewer #2. Moreover, for transparency, we will include plots of all the pRF parameter maps for all participants, including pRF size, polar angle, eccentricity, normalized R2, and raw R2.

    Sample size

    The reviewers shared concerns about the sample size of our study. We disagree that this is a weakness of our study. It is important to note that large sample sizes are not necessary to obtain conclusive results, especially when the research aims to test whether an effect exists, rather than finding out how strong the effect is on average in a population (Schwarzkopf & Huang, 2024, currently out as preprint, but in press at Psychological Methods). Our results showed robust within-subject effects, consistent across multiple visual regions in most individual participants. A larger sample size would not necessarily improve the reliability of our findings. Treating each individual as an independent replication, our results suggest a high probability that they would replicate in each additional participant we could scan.

    Reviewer #1:

    We thank the reviewer for their careful evaluation and positive comments. We will include a more detailed discussion about the issues pointed out, and an additional plot showing the polar angle for both adapter conditions. In line with previous work on the reliability of pRF estimates (van Dijk, de Haas, Moutsiana, & Schwarzkopf, 2016; Senden, Reithler, Gijsen, & Goebel, 2014), both polar angle and eccentricity maps are very stable between the two adaptation conditions.

    Reviewer #2:

    We thank the reviewer for their comments - we will improve how we report key findings which we hope will clarify matters raised by the reviewer.

    RF positions in a voxel

    The reviewer’s comments suggest that they may have misunderstood the diagram (Figure 1A) illustrating the theoretical basis of the adaptation effect, likely due to us inadvertently putting the small RFs in the middle of the illustration. We will change this figure to avoid such confusion.

    Theoretical explanation of adaptation effect

    The reviewer’s explanation of how adaptation should affect the size of a pRF that averages across individual RFs is incorrect. When selecting RFs from a fixed range of semi-uniformly distributed positions (as in an fMRI voxel), the average position of the RFs (corresponding to pRF position) is naturally near the center of this range. The average size (corresponding to pRF size) reflects the visual field coverage of these individual RFs. This aggregate visual field coverage thus also reflects the individual sizes. When large RFs have been adapted out, the visual field coverage at the boundaries is sparser, and the aggregate pRF is therefore smaller. The opposite happens when adapting out the contribution of small RFs. We demonstrate this with a simple simulation at this OSF link: https://osf.io/ebnky/.
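    A minimal sketch of this kind of simulation is shown below. It is illustrative only (the distributions, parameter ranges, and weighting scheme here are assumptions for demonstration, not the parameters of the simulation at the OSF link): the pRF is treated as a weighted mixture of Gaussian RFs, and adaptation is modelled as down-weighting the responses of small- or large-RF neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of neuronal RFs within one voxel:
# centers scattered semi-uniformly around the voxel's visual-field
# position, sizes drawn independently of position (illustrative choice).
n = 10_000
centers = rng.uniform(-1.0, 1.0, n)  # RF centers (deg of visual angle)
sizes = rng.uniform(0.2, 1.0, n)     # Gaussian RF sigmas (deg)

def aggregate_sigma(weights):
    """Width (std. dev.) of the weighted Gaussian-mixture profile,
    standing in for the measured pRF size."""
    w = weights / weights.sum()
    mean = np.sum(w * centers)
    # Second moment of a Gaussian mixture minus squared mean.
    var = np.sum(w * (sizes**2 + centers**2)) - mean**2
    return np.sqrt(var)

neutral = np.ones(n)
# Low-SF adaptation: desensitise large-RF (low-SF-preferring) neurons,
# i.e. give them lower weight.
low_sf_adapt = 1.0 / sizes
# High-SF adaptation: desensitise small-RF (high-SF-preferring) neurons.
high_sf_adapt = sizes.copy()

print(aggregate_sigma(low_sf_adapt),
      aggregate_sigma(neutral),
      aggregate_sigma(high_sf_adapt))
```

    Even though the RF centers (and hence the pRF position) are unchanged, the aggregate width shrinks when large RFs are adapted out and grows when small RFs are adapted out, matching the hypothesised direction of the effect.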

    Figure S2

    It is not actually possible to compare R2 between regions by looking at Figure S2 because it shows the pRF size change, not R2. Therefore, the arguments Reviewer #2 made based on their interpretation of the figure are not valid. Just as the reviewer expected, V1 is one of the brain regions with good pRF model fits. In our revision, we will include normalized and raw R2 maps to make this more obvious to the readers and provide additional explanations.

    V1 appeared essentially empty in that plot primarily due to the sigma threshold we selected, which was unintentionally more conservative than those applied in our analyses and other figures. We apologize for this mistake and will correct it in the revised version by including a plot with the appropriate sigma threshold.

    Thresholding details

    Thresholding information was included in our original manuscript; however, we will include more information in the figure captions to make it more obvious.

    2D plots will replace histograms

    We thank the reviewer for this suggestion. The manuscript contained histograms showing the distribution of pRF size for both adaptation conditions for each participant and visual area (Figure S1). However, we agree that 2D plots better communicate the difference in pRF parameters between conditions, so we will replace this figure. We will consider 2D kernel density plots as suggested by the reviewer; however, such plots can obscure distributional anomalies, so they may not be the optimal choice, and we may opt to show transparent scatter plots of individual pRFs instead.

    (proportional) pRF size-change map

    The reviewer requests pRF size difference maps. Figure S2 in fact demonstrates the proportional difference between the pRF sizes of the two adaptation conditions. Instead of simply taking the difference, we believe showing the proportional change map is more sensible because overall pRF size varies considerably between visual regions. We will explain this more clearly in our revision.

    pRF eccentricity plot

    “I suspect that the difference in PRF size across voxels correlates very strongly with the difference in eccentricity across voxels.”

    Our manuscript already contains a supplementary plot (Figure S4B) comparing the eccentricity between adapter conditions, showing no notable shift in eccentricities except in V3A - but that is a small region and the results are generally more variable. We will comment more on this finding in the main text and explain this figure in more detail.

    To the reviewer’s point, even if there were an appreciable shift in eccentricity between conditions (as they suggest may have happened for the example participant we showed), this does not mean that the pRF size effect is “due [...] to shifts in eccentricity.” Parameters in a complex multi-dimensional model like the pRF are not independent. There is no way of knowing whether a change in one parameter is causally linked with a change in another. We can only report the parameter estimates the model produces.

    In fact, it is conceivable that adaptation causes both: changes in pRF size and eccentricity. If more central or peripheral RFs tend to have smaller or larger RFs, respectively, then adapting out one part of the distribution will shift the average accordingly. However, as we already established, we find no compelling evidence that pRF eccentricity changes dramatically due to adaptation, while pRF size does. We will illustrate this using the 2D plots in our revision.

    Reviewer #3:

    We thank the reviewer for their comments.

    pRF model

    Top-up adapters were not modelled in our analyses because they are shared events in all TRs, critically also including the “blank” periods, providing a constant source of signal. Therefore modelling them separately cannot meaningfully change the results. However, the reviewer makes a good suggestion that it would be useful to mention this in the manuscript, so we will add a discussion of this point.

    pRF size vs eccentricity

    We will add a plot showing pRF size in the two adaptation conditions (in addition to the pRF size difference) as a function of eccentricity.

    Correlation with behavioral effect

    In the original manuscript, we pointed out why the correlation between the magnitude of the behavioral effect and the pRF size change is not an appropriate test for our data. First, the reviewer is right that a larger sample size would be needed to reliably detect such a between-subject correlation. More importantly, as per our recruitment criteria for the fMRI experiment, we did not scan participants showing weak perceptual effects. This limits the variability in the perceptual effect and makes correlation inapplicable.

    References

    Schwarzkopf, D. S., & Huang, Z. (2024). A simple statistical framework for small sample studies. bioRxiv, 2023.09.19.558509. https://doi.org/10.1101/2023.09.19.558509

    Senden, M., Reithler, J., Gijsen, S., & Goebel, R. (2014). Evaluating population receptive field estimation frameworks in terms of robustness and reproducibility. PLoS ONE, 9(12). https://doi.org/10.1371/JOURNAL.PONE.0114054

    van Dijk, J. A., de Haas, B., Moutsiana, C., & Schwarzkopf, D. S. (2016). Intersession reliability of population receptive field estimates. NeuroImage, 143, 293–303. https://doi.org/10.1016/J.NEUROIMAGE.2016.09.013