Cortical magnification eliminates differences in contrast sensitivity across but not around the visual field

Curation statements for this article:
  • Curated by eLife

    eLife assessment

    This important study provides a provocative potential challenge to explain sensitivity across the visual field by using cortical magnification factors. The evidence supporting this theoretical challenge is solid in general, although the inclusion of subject-specific measurements of cortical magnification factors would have strengthened the study. The work will be of interest to vision researchers of both basic and medical science.


Abstract

Human visual performance changes dramatically both across (eccentricity) and around (polar angle) the visual field. Performance is better at the fovea, decreases with eccentricity, and is better along the horizontal than vertical meridian and along the lower than the upper vertical meridian. However, all neurophysiological and virtually all behavioral studies of cortical magnification have investigated eccentricity effects without considering polar angle. Most performance differences due to eccentricity are eliminated when stimulus size is cortically magnified (M-scaled) to equate the size of its cortical representation in primary visual cortex (V1). But does cortical magnification underlie performance differences around the visual field? Here, to assess contrast sensitivity, human adult observers performed an orientation discrimination task with constant stimulus size at different locations as well as when stimulus size was M-scaled according to stimulus eccentricity and polar angle location. We found that although M-scaling stimulus size eliminates differences across eccentricity, it does not eliminate differences around the polar angle. This finding indicates that limits in contrast sensitivity across eccentricity and around polar angle of the visual field are mediated by different anatomical and computational constraints.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    In this study, Jigo et al. measured the entire contrast sensitivity function and manipulated eccentricity and stimulus size to assess changes in contrast sensitivity and acuity for different eccentricities and polar angles. They found that CSFs decreased with eccentricity, but to a lesser extent after M-scaling, whereas compensating for striate-cortical magnification around the polar angle of the visual field did not equate contrast sensitivity.

    In this article, the authors used classic psychophysical tests and a simple experimental design to answer the question of whether cortical magnification underlies polar angle asymmetries of contrast sensitivity. Contrast sensitivity is considered to be the most fundamental measure of spatial vision and is important for both normal individuals and clinical patients in ophthalmology. The parametric contrast sensitivity model and the extraction of key CSF attributes help to compare the effect of M-scaling at different polar angles. This work can provide a new reference for the study of normal and abnormal spatial vision.

    The conclusions of this paper are mostly well supported by data, but some aspects of data collection and analysis need to be clarified and extended.

    1. In addition to the key CSF attributes used in this paper, the area under the CSF curve is a common, global parameter to figure out how contrast sensitivity changes under different conditions. An analysis of the area under the CSF curve is recommended.

    – We have added the area under the CSF (AULCSF) [lines 305-319, Fig 5 E-F; lines 339-343, Fig 6 E-F]. Differences for non-magnified and magnified stimuli are not eliminated.
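
    For readers who want the computation spelled out, below is a minimal sketch of one common way to obtain the AULCSF from a fitted CSF, assuming the conventional approach of integrating log sensitivity over log spatial frequency; the exact definition used in the revised analysis may differ, and the CSF values in the example are purely illustrative.

    ```python
    import numpy as np

    def aulcsf(spatial_freqs, sensitivities):
        """Area under the log CSF: integrate log10(sensitivity) over log10(SF).

        spatial_freqs : spatial frequencies (cpd), ascending
        sensitivities : fitted contrast sensitivities (1 / contrast threshold)
        Sensitivities below 1 contribute nothing (log sensitivity clipped at 0).
        """
        x = np.log10(np.asarray(spatial_freqs, dtype=float))
        y = np.clip(np.log10(np.asarray(sensitivities, dtype=float)), 0.0, None)
        # trapezoidal rule, written out to avoid version-specific numpy helpers
        return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

    # Illustrative (made-up) fitted CSF values at a few spatial frequencies
    sfs = np.array([0.5, 1, 2, 4, 8, 16])
    cs = np.array([30, 60, 80, 50, 10, 1.2])
    print(f"AULCSF = {aulcsf(sfs, cs):.2f}")
    ```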

    2. In Figure 2, CRFs are given for several SFs, but were the CRFs at the cutoff SF well fitted? The authors should have provided the CRF results and corresponding fits to make their results more solid.

    – As reported in Fig 4A,C,E, the group data fits were very high (≥.98).

    3. The authors suggested that the apparent decrease in HVA extent at high SF may be due to the lower cutoff SF of the perifoveal VM. Analysis of the correlation between the change in HVA and cutoff SF after M-scaling may help to draw more comprehensive conclusions.

    – We have rephrased our explanation [lines 453-460]. As per your suggestion, we correlated the change in HVA and the cutoff SF after M scaling and found these correlations to be non significant.

    4. In Figure 6, it would be desirable to add panels of exact values of HVA and VMA effects for key CSF attributes at different eccentricities, as shown in Figures 4B, D, and F, to make the results more intuitive.

    – We have added these panels [FIG 6] and the corresponding analysis in the text [lines 321-343]
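
    For readers who want the asymmetry indices made explicit, the sketch below shows one common way HVA and VMA effects are quantified (difference relative to the mean, expressed as a percentage), in line with conventions used in earlier work from this group; the exact formula and the numbers below are illustrative assumptions, not values from the paper.

    ```python
    def asymmetry_extent(a, b):
        """Percent asymmetry between two locations: 100 * (a - b) / mean(a, b)."""
        a, b = float(a), float(b)
        return 100.0 * (a - b) / ((a + b) / 2.0)

    # Hypothetical contrast sensitivities at one eccentricity
    cs_horizontal = 55.0       # horizontal meridian (average of left and right)
    cs_upper_vertical = 40.0   # upper vertical meridian
    cs_lower_vertical = 47.0   # lower vertical meridian
    cs_vertical = (cs_upper_vertical + cs_lower_vertical) / 2.0

    hva = asymmetry_extent(cs_horizontal, cs_vertical)            # horizontal-vertical anisotropy
    vma = asymmetry_extent(cs_lower_vertical, cs_upper_vertical)  # vertical-meridian asymmetry
    print(f"HVA extent = {hva:.1f}%, VMA extent = {vma:.1f}%")
    ```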

    5. More discussion is needed to interpret the results. 1) Due to the different testing distances for the VM and HM, the retina will be in different adaptation states, making any comparison between VM and HM tricky. The authors should have added a discussion of this issue.

    – Note that the mean luminance of the display was 23 cd/m2 at the 57 cm viewing distance and 19 cd/m2 at 115 cm. The pupil size difference for these two conditions is relatively small (< 0.5 mm) and should not significantly affect contrast sensitivity (Rahimi-Nasrabadi et al., 2021) [lines 483-491]. Moreover, the differences we obtain here are consistent with the asymmetries we (e.g., Carrasco, Talgar & Cameron, 2001; Cameron, Tai & Carrasco, 2002; Fuller, Park & Carrasco, 2009; Abrams, Nizam & Carrasco, 2012; Corbett & Carrasco, 2012; Himmelberg, Winawer & Carrasco, 2020) and many others (e.g., Baldwin et al., 2012; Pointer & Hess, 1989; Regan and Beverley, 1983; Rijsdijk et al., 1980; Robson and Graham, 1981; Rosén et al., 2014; Silva et al., 2008) have observed for contrast sensitivity when the vertical and horizontal meridians are tested simultaneously at the same distance.

    6. In Figure 4, the HVA extent appears to change after M-scaling, although the analysis shows that M-scaling only affects the HVA extent at high SF. In contrast, the range of VMA was almost unchanged. The authors could have discussed more how the HVA and VMA effects behave differently after M-scaling.

    – We had commented on this pattern and have further clarified it [lines 436-451]

    7. The results in Figure 4 also show that at 11.3 cpd, the measurement may be inaccurate. This might lead to an inaccurate estimate of the M-scaling effect at 11.3 cpd. The authors should discuss this issue more.

    – We have explained why this data point is at chance [FIG 4 caption]

    8. The notion that neural image-processing capabilities differ among locations, referred to as the "Qualitative hypothesis", is the main hypothesis explaining the differences around the polar angle of the visual field. To help the reader better understand this concept, the authors should provide further discussion.

    – We have expanded the discussion of the qualitative hypothesis of differences in polar angle [lines 86-92; lines 476-481].

    9. The authors should also provide more details about their measures. For example, high grayscale resolution is crucial in contrast sensitivity measurements, and the authors should clarify whether the monitor was calibrated with high grayscale resolution or only at 8-bit. Since the main experiment measured CS at different locations, it should also be clarified whether the global uniformity of the display was calibrated.

    – The monitor was calibrated at 8-bit resolution at the center of the display [line 607].

    – Regarding global uniformity, although we only calibrated at the center of the display, please note that the asymmetries are not due to the particular monitor we used. We have obtained these asymmetries in contrast sensitivity in numerous studies using multiple monitors over 20 years (e.g., Carrasco, Talgar & Cameron, 2001; Cameron, Tai & Carrasco, 2002; Fuller, Park & Carrasco, 2009; Abrams, Nizam & Carrasco, 2012; Corbett & Carrasco, 2012; Hanning et al., 2022a; Himmelberg et al., 2020) and other groups have reported these visual asymmetries as well (Baldwin et al., 2012; Pointer and Hess, 1989; Rosén et al., 2014). Importantly, as we had mentioned in the Introduction [lines 55-59], the HVA and VMA shift in line with egocentric referents, corresponding to the retinal location of the stimulus, not with the allocentric location (Corbett & Carrasco, 2011).
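
    As general background on what a display calibration of this kind typically involves, here is a generic sketch of gamma linearization via a power-law fit and an inverse lookup table; it is not the specific calibration routine used in this study, and all measurements and constants below are placeholders.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_model(v, l_min, l_max, gamma):
        """Predicted luminance (cd/m^2) for an 8-bit gun value v in [0, 255]."""
        return l_min + (l_max - l_min) * (v / 255.0) ** gamma

    # Placeholder photometer readings (gun value -> cd/m^2) at the screen center
    gun_values = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
    luminance = np.array([0.5, 1.2, 3.0, 6.1, 10.5, 16.4, 24.0, 33.5, 45.0])

    (l_min, l_max, gamma), _ = curve_fit(gamma_model, gun_values, luminance,
                                         p0=[0.5, 45.0, 2.2])

    # Inverse lookup table: desired linearly spaced luminance -> gun value
    desired = np.linspace(l_min, l_max, 256)
    lut = 255.0 * ((desired - l_min) / (l_max - l_min)) ** (1.0 / gamma)
    print(f"fitted gamma = {gamma:.2f}; LUT entry 128 -> gun value {lut[128]:.1f}")
    ```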

    1. In addition, their method of data analysis relies on parametric contrast sensitivity model fitting. One of the concerns is whether there are enough trials for each SF to measure the threshold. The authors should have included in their method the number of trials corresponding to each SF in each CSF curve.

    – We have specified the number of trials [lines 637-644].

    Reviewer #2 (Public Review):

    This is an interesting manuscript that explores the hypothesis that inhomogeneities in visual sensitivity across the visual field are not solely driven by cortical magnification factors. Specifically, they examine the possibility that polar angle asymmetries are subserved by differences not necessarily related to the neural density of representation. Indeed, when stimuli were cortically magnified, pure eccentricity-related differences were minimized, whereas applying that same cortical magnification factor had less of an effect on mitigating polar angle visual field anisotropies. The authors interpret this as evidence for qualitatively distinct neural underpinnings. The question is interesting, the manuscript is well written, and the methods are well executed.

    1. The crux of the manuscript appears to lean heavily on M-scaling constants, to determine how much to magnify the stimuli. While this does appear to do a modest job compensating for eccentricity effects across some spatial frequencies within their subject pool, it of course isn't perfect. But what I am concerned about is the degree to which the M-scaling that is then done to adjust for presumed cortical magnification across meridians is precise enough to rely on entirely to test their hypothesis. That is, do the authors know whether the measures of cortical magnification across a polar angle that are used to magnify these stimuli are as reliable across subjects as they tend to be for eccentricity alone? If not, then to what degree can we trust the M-scaled manipulation here? In an ideal world, the authors could have empirically measured cortical surface area for their participants, using a combination of retinotopy and surface-based measures, and precisely compensated for cortical magnification, per subject. It would be helpful if the authors better unpacked the stability across subjects for their cortical magnification regime across polar angles.

    –– We note that the equations by Rovamo and Virsu (1979) are commonly used to cortically magnify stimulus size. That paper has been cited extensively, and the conclusions of many studies are based on those calculations [lines 115-128].
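
    To make the logic of M-scaling concrete for readers, here is a minimal sketch of how stimulus size can be enlarged with eccentricity so that its approximate extent in V1 stays constant, using a cortical magnification function of the Rovamo and Virsu form M(E) = M0 / (1 + aE + bE^3). The constants shown are roughly the commonly quoted nasal-meridian values and serve only as placeholders; the meridian-specific constants actually used should be taken from Rovamo and Virsu (1979) and the Methods, not from this sketch.

    ```python
    def cortical_magnification(ecc_deg, m0=7.99, a=0.33, b=0.00007):
        """Cortical magnification (mm of V1 per deg) of the form M0 / (1 + a*E + b*E**3).

        The default constants approximate commonly quoted nasal-meridian values and
        are placeholders; substitute meridian-specific constants in practice.
        """
        return m0 / (1.0 + a * ecc_deg + b * ecc_deg ** 3)

    def m_scaled_size(size_ref_deg, ecc_ref_deg, ecc_deg, **constants):
        """Stimulus size (deg) at ecc_deg whose approximate cortical extent
        (size x M) matches that of a reference stimulus at ecc_ref_deg."""
        return (size_ref_deg * cortical_magnification(ecc_ref_deg, **constants)
                / cortical_magnification(ecc_deg, **constants))

    # Example: a 2-deg stimulus at 2 deg eccentricity, enlarged for 6 deg eccentricity
    print(round(m_scaled_size(2.0, 2.0, 6.0), 2))
    ```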

    –– In response to Reviewer 3’s comment, “In lieu of carrying out new measurements, it could also suffice to compare individual cortical magnification factors to the performance to quantify the contribution to the psychophysical performance”, we found a significant correlation between cortical surface area and contrast sensitivity at the horizontal, upper-vertical and lower-vertical meridians. However, we found no significant correlation between cortical surface area and the difference in contrast sensitivity for fixed-size and magnified stimuli at 6 deg at each meridian. These findings suggest that surface area plays a role but that individual magnification is unlikely to equalize contrast sensitivity [lines 366-380; Fig 7; lines 511-529].

    2. Related to this previous point, the description of the cortical magnification component of the methods, which is quite important, could be expanded on a bit more, or even placed in the body of the main text, given its importance. Incidentally, it was difficult to figure out what the references were in the Methods because they were indexed using a numbering system (perhaps formatted for a different journal), so I could only make best guesses as to what was being referred to. This was particularly relevant for model assumptions and motivation.

    –– We now detail M-scaling in the Introduction [lines 115-135], and we have fixed the references in the Methods section.

    3. Another methodological aspect of the study that was unclear was how the fitting worked. The authors do a commendably thorough job incorporating numerous candidate CSF models. However, my reading of the methods description of the fitting procedure was that each participant was fitted with all the models, and the best model was then used to test the various anisotropy models afterwards. What was the motivation for letting each individual have their own qualitatively distinct CSF model? That seems rather unusual.

    Related to this, while the peak of the CSF is nicely sampled, there was little data near the cutoff at higher spatial frequencies, which, at least in the single-subject data shown, made the cutoff frequency measure seem unreliable. Did the authors find that to be an issue in fitting the data?

    –– We have further clarified that we fit all 9 models to the grouped data [lines 177-178] and in the Methods [lines 693, 716, 725], and that the fit in Figure 3 corresponds to the grouped data [Fig 3 caption]. As reported in Fig 4A,C,E, the group data fits were very high (≥.98). Please note that the cutoff spatial frequency is reliable. The data point (11.3 cpd) that does not follow the same function in the difference plots (Fig 4D,F) reflects the fact that, for both magnified and non-magnified stimuli, performance was at chance, consistent with high SFs being harder to discriminate at peripheral locations [Fig 4 caption].
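
    To illustrate how key CSF attributes can be read off a parametric fit to grouped data, here is a minimal sketch assuming a log-parabola CSF, one common candidate form (not necessarily the best-fitting of the nine models compared in the paper); the data points are hypothetical, and the cutoff SF is taken as the frequency at which fitted sensitivity falls to 1.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def log_parabola_csf(sf, log_peak_cs, peak_sf, bandwidth):
        """log10 contrast sensitivity as an inverted parabola in log10 spatial frequency."""
        return log_peak_cs - ((np.log10(sf) - np.log10(peak_sf)) / bandwidth) ** 2

    # Hypothetical grouped contrast sensitivities (1 / threshold) at the tested SFs
    sfs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 11.3])
    cs = np.array([25.0, 48.0, 62.0, 40.0, 8.0, 1.5])

    params, _ = curve_fit(log_parabola_csf, sfs, np.log10(cs),
                          p0=[np.log10(60.0), 2.0, 0.8],
                          bounds=([0.0, 0.1, 0.1], [3.0, 20.0, 5.0]))
    log_peak_cs, peak_sf, bandwidth = params

    # Cutoff SF: where fitted log10 sensitivity reaches 0 (sensitivity = 1)
    cutoff_sf = 10 ** (np.log10(peak_sf) + bandwidth * np.sqrt(log_peak_cs))
    print(f"peak CS = {10 ** log_peak_cs:.1f}, peak SF = {peak_sf:.2f} cpd, "
          f"cutoff SF = {cutoff_sf:.1f} cpd")
    ```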

    4. The manuscript concludes that cortical magnification is insufficient to explain the polar angle inhomogeneities in perceptual sensitivity. However, there is little discussion of what the authors believe may actually underlie these effects. It would be productive if they could offer some possible explanation.

    –– We have expanded the discussion of the qualitative hypothesis of differences in polar angle [lines 86-92; lines 476-481].

    –– We have expanded the discussion of possible mechanisms [lines 496-529].

    –– We have explained why having assessed the VM and HM at different distances does not significantly influence our measures [lines 483-491].

    –– We have expanded the discussion of how the HVA and VMA effects behave differently after M-scaling [lines 435-450].

    –– We have clarified that the fits are reliable and made explicit that the highest SF data point is at chance in both conditions [FIG 4 caption].

    Reviewer #3 (Public Review):

    Jigo, Tavdy & Carrasco used visual psychophysics to measure contrast sensitivity functions across the visual field, varying not only the distance from fixation (eccentricity) but also the angular position (meridian). Both parameters have been shown to affect visual sensitivity: spatial visual acuity generally falls off with eccentricity; it is now widely accepted that performance is superior along the horizontal compared to the vertical meridian; and there may also be differences between the upper and lower visual field, although this anisotropy is typically less pronounced. The eccentricity-dependent decrease in performance is thought to be due to reduced cortical magnification in peripheral compared to central vision; that is, the amount of brain tissue devoted to mapping a fixed amount of visual space. The authors, therefore, include a crucial experimental condition in which they scale the size of their stimuli to account for reduced cortical magnification. They find that while this corrects for reduced performance related to stimulus eccentricity, it does not fully explain the variation in performance at different visual field meridians. They argue that this suggests neural mechanisms other than cortical magnification alone underlie this intra-individual variability in visual perception.

    The experiments are done to an extremely high technical standard, the analysis is sound, and the writing is very clear. The main weakness is that, as it stands, the argument against cortical magnification as the factor driving this meridional variability in visual performance is not entirely convincing. The scaling of stimulus size is based on estimates from previous studies. There are two issues with this: First, these studies are all quite old and therefore used methods that cannot be considered state-of-the-art anymore. In turn, the estimates of cortical magnification may be a poor approximation of actual differences in cortical magnification between meridians.

    –– We note that the equations by Rovamo and Virsu (1979) are commonly used to cortically magnify stimulus size. That paper has been cited extensively, and the conclusions of many studies are based on those calculations [lines 115-128].

    –– In response to Reviewer 3’s comment, “In lieu of carrying out new measurements, it could also suffice to compare individual cortical magnification factors to the performance to quantify the contribution to the psychophysical performance”, we found a significant correlation between cortical surface area and contrast sensitivity at the horizontal, upper-vertical and lower-vertical meridians. However, we found no significant correlation between cortical surface area and the difference in contrast sensitivity for fixed-size and magnified stimuli at 6 deg at each meridian. These findings suggest that surface area plays a role but that individual magnification is unlikely to equalize contrast sensitivity [lines 366-380; Fig 7; lines 511-529].

    Second, we now know that this intra-individual variability is rather idiosyncratic (and there could be a wider discussion of previous literature on this topic). Since these meridional differences, especially between upper and lower hemifields, are relatively weak compared to the variance, a scaling factor based on previous data may simply not adequately correct these differences. In fact, the difference in scaling used for the upper and lower vertical meridian is minute, 7.7 vs 7.68 degrees of visual angle, respectively. This raises the question of whether such a small difference could really have affected performance.

    That said, there have been reports of meridional differences in the spatial selectivity of the human visual cortex (Moutsiana et al., 2016; Silva et al., 2017) that may not correspond one-to-one with cortical magnification. This could be a neural substrate for the differences reported here. This possibility could also be tested with their already existing neurophysiological data. Or perhaps, there could be as-yet undiscovered differences in the visual system, e.g., in terms of the distribution of cells between the ventral and dorsal retina. As such, the data shown here are undoubtedly significant and these possibilities are worth considering. If the authors can address this critique either by additional experiments, analyses, or by an explanation of why this cannot account for their results, this would strengthen their current claims; alternatively, the findings would underline the importance of these idiosyncrasies in the visual cortex.

    –– We now include a discussion of the different points that the reviewer raised here in our new section 'What mechanism might underlie perceptual polar angle asymmetries' [lines 497-530].
