Rat sensitivity to multipoint statistics is predicted by efficient coding of natural scenes

Curation statements for this article:
  • Curated by eLife

    Evaluation Summary:

    This research will be of interest to neuroscientists who want to understand how visual systems are tuned to and encode natural scenes. It reports that rats share phenomenology with humans in sensitivity to spatial correlations in scenes. This work shows that an earlier paper's hypothesis about efficient coding may be broadly applicable, but it is perhaps most interesting in opening up the possibility of studying this sort of visual tuning in an animal where invasive techniques can be used to study this sensitivity and its development.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. The reviewers remained anonymous to the authors.)

Abstract

Efficient processing of sensory data requires adapting the neuronal encoding strategy to the statistics of natural stimuli. Previously, in Hermundstad et al., 2014, we showed that local multipoint correlation patterns that are most variable in natural images are also the most perceptually salient for human observers, in a way that is compatible with the efficient coding principle. Understanding the neuronal mechanisms underlying such adaptation to image statistics will require performing invasive experiments that are impossible in humans. Therefore, it is important to understand whether a similar phenomenon can be detected in animal species that allow for powerful experimental manipulations, such as rodents. Here we selected four image statistics (from single- to four-point correlations) and trained four groups of rats to discriminate between white noise patterns and binary textures containing variable intensity levels of one such statistic. We interpreted the resulting psychometric data with an ideal observer model, finding a sharp decrease in sensitivity from two- to four-point correlations and a further decrease from four- to three-point correlations. This ranking fully reproduces the trend we previously observed in humans, thus extending a direct demonstration of efficient coding to a species where neuronal and developmental processes can be interrogated and causally manipulated.
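
For readers unfamiliar with this texture family, the sketch below illustrates how such stimuli can be constructed. It is a minimal example, not the stimulus-generation code used in the study: it assumes the common ±1 pixel convention, in which a statistic's intensity level is the expected product of the pixels in the corresponding glider (so white noise has all statistics equal to zero), and it covers only the 1-point (γ) and horizontal 2-point (β) statistics; the function names are ours. Textures defined by 3- and 4-point correlations require the analogous maximum-entropy construction over larger pixel gliders, which is not shown here.

```python
# Minimal sketch (not the authors' code): binary textures with a prescribed
# 1-point or horizontal 2-point statistic, using the +/-1 pixel convention.
import numpy as np

def gamma_texture(shape, gamma, rng=None):
    """1-point statistic: each pixel is white (+1) with probability (1 + gamma) / 2,
    so E[x] = gamma."""
    rng = np.random.default_rng() if rng is None else rng
    return np.where(rng.random(shape) < (1 + gamma) / 2, 1, -1)

def beta_horizontal_texture(shape, beta, rng=None):
    """2-point horizontal statistic: each pixel copies its left neighbour with
    probability (1 + beta) / 2, so E[x[i, j] * x[i, j + 1]] = beta."""
    rng = np.random.default_rng() if rng is None else rng
    rows, cols = shape
    tex = np.empty(shape, dtype=int)
    tex[:, 0] = np.where(rng.random(rows) < 0.5, 1, -1)    # unbiased first column
    same = rng.random((rows, cols - 1)) < (1 + beta) / 2   # copy-or-flip decisions
    for j in range(1, cols):
        tex[:, j] = np.where(same[:, j - 1], tex[:, j - 1], -tex[:, j - 1])
    return tex

noise = gamma_texture((32, 32), 0.0)              # white noise: all statistics zero
beta_patch = beta_horizontal_texture((32, 32), 0.8)   # strong horizontal 2-point correlation
```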

Article activity feed

  1. Author Response:

    Reviewer #1 (Public Review):

    This research follows up on prior work showing that human visual pattern discriminability is closely related to the statistical features of natural scenes. The present work developed a behavioral choice paradigm to test whether rats could discriminate between patterns, and then measured their sensitivity to different spatial correlation structures. This allowed the authors to test whether rats possess the same sensitivity to spatial correlation patterns as had been observed in human psychophysical experiments. The experiments found that the ordering of the sensitivity to spatial correlation patterns matched that measured in humans and followed the frequency of different structures in natural scenes, in accordance with an efficient coding hypothesis.

    This work has a strong theoretical grounding. The behavioral experiments are well executed and the results show convincingly that rat behavior follows the same pattern as measured in humans. One strength of these data is that they show that the order in which these patterns were presented during training did not matter for the eventual relative sensitivities measured in the rats.

    This research opens up the opportunity to test whether the correlational sensitivities can be altered by changing visual environments, and if so, what neural substrates might be plastic in these cases.

    We thank the reviewer for their enthusiastic support. In our revised manuscript, we have redesigned Figure 4 (i.e., our former Figure 3) to better highlight that the relative sensitivity to the different patterns was largely independent of the animals' training history.

    Reviewer #2 (Public Review):

    Caramellino et al. investigated whether rat sensitivity to multipoint correlations shows a rank order similar to that observed in humans. They show that rat sensitivity to multipoint correlations exhibits a rank order similar to what was shown in Hermundstad et al., 2014. Interestingly, they also show that this rank order is robust both within-group and within-subject. The authors further claim that this similarity indicates that rat sensitivity to multipoint correlations follows efficient coding of natural scenes.

    The main conclusion of the paper is mostly supported by the data. However, the presentation of results may benefit from some points of clarification:

    1. In Hermundstad et al., 2014, the comparison of the degree of variation in the images themselves (Figure 1E), as well as the perceptual sensitivity comparison (Figures 2D and 3A), was carried out among: 2nd-order horizontal/vertical edges, 2nd-order diagonal edges, L-shaped 3rd-order correlations, and 4th-order correlations. However, the comparisons here are: first-order, all 2nd-order correlations including ALL horizontal/vertical/diagonal edges, L-shaped 3rd-order correlations, and 4th-order correlations. It is unclear how these two rank-order results are parallel given that Hermundstad et al., 2014 did not include 1st-order at all.

    First of all, let us clarify some confusion that seems to exist about the identity of the texture patterns that we tested in the experiment. We did not, as the Reviewer seems to imply, test (and pool over) ALL 2nd order and ALL 3rd order correlations. What we did was to choose one pattern for each order, namely the horizontal 2-point pattern and the “bottom right pointing” 3-point pattern, to build our stimulus set. We apologize for not explaining this clearly enough in the initial version of our text. We now state this more explicitly (lines 88-114), and we discuss at length the rationale behind our choice of experimental patterns (beyond the aforementioned passage in the Results, also in the Discussion, lines 190-256, and the Methods, lines 304-333).

    Regarding the inclusion of the 1st-order statistic, it is true that it was only studied in Victor and Conte, 2012, which contained only psychophysics data, and not in Hermundstad et al., 2014, which connected psychophysics with natural image statistics. Indeed, it is not possible to analyze the variability of γ in natural images with the method established by Hermundstad et al., 2014, because each image is binarized in such a way as to guarantee that γ=0 by construction (see the sketch below). In this sense, like the use of qualitative ranking discussed at greater length below, γ was included to better reflect the approach of Victor and Conte. Moreover, we wanted to include a sensory stimulus condition that we were sure the animals could detect well, in order to ensure that any failure to learn or perform the task was due to limitations in sensory processing and not in the learning or decision-making process. Before performing our experiments, the only statistic that we were confident the rats could be trained to distinguish from noise was γ [Tafazoli et al 2017, Vascon et al 2019], and therefore it made sense to include it in the experimental design. We have modified the Results (lines 90-93, 104-108), the Methods (316-321) and the Discussion (212-218) to express this point more clearly.
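
    As an illustration of this point, the following minimal sketch shows why per-patch median binarization fixes γ at zero. It is our own toy example, not the analysis code of Hermundstad et al., 2014; it only assumes that each grayscale patch is thresholded at its median so that exactly half of the pixels become white.

    ```python
    # Toy illustration (not the original analysis code): thresholding a patch at its
    # median yields equal numbers of black (-1) and white (+1) pixels, so the 1-point
    # statistic gamma (the mean pixel value) is zero by construction.
    import numpy as np

    def binarize_at_median(patch):
        """Map a grayscale patch to -1/+1 with an exact 50/50 split (ties broken by rank)."""
        order = np.argsort(patch, axis=None)      # pixel indices from darkest to brightest
        binary = -np.ones(patch.size, dtype=int)
        binary[order[patch.size // 2:]] = 1       # brighter half becomes white
        return binary.reshape(patch.shape)

    patch = np.random.default_rng(0).random((8, 8))   # stand-in for a natural-image patch
    print(binarize_at_median(patch).mean())           # estimate of gamma: always 0.0
    ```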

    2. In Hermundstad et al., 2014, the paper emphasized that the difference in perceptual sensitivity between horizontal/vertical edges and diagonal edges is not merely an "oblique effect": pixels involved in a horizontal or vertical pairwise correlation share an edge, while pixels involved in a diagonal pairwise correlation share only a corner. One wonders whether rats show any sensitivity difference between horizontal/vertical edges and diagonal edges. The manuscript in its current form misses this important comparison. Without showing this, the rat sensitivity does not fully reproduce the trend previously observed in humans.

    When designing our experiment, we prioritized collecting data for the other statistics as they were closer to the extremes of the measured sensitivity values, therefore offering a clearer signal for a comparison with rat data. For instance, had we found better sensitivity to 3- or 4-point statistics than to (horizontal) 2-point statistics, this would have been a very clear sign that perceptual sensitivity in rats is organized differently than in humans. Conversely, we reasoned that a comparison based on 2-point diagonal instead of 2-point horizontal would have been more easily muddled and made inconclusive by the experimental noise that we expected to observe in rats. We agree that, given the high precision of the quantitative match between rats, humans and image statistics now highlighted by the new Fig. 3, it would be interesting to also test rats for their sensitivity to diagonal 2-point correlations and check whether it matches the pattern exhibited by humans. However, as the editor rightly surmises, acquiring new data at this stage would indeed be exceedingly time consuming. Therefore, we have modified the text to better highlight that we did not seek to replicate this particular result of Hermundstad et al., 2014 (and that, more generally, we could not test as many correlation patterns as in Hermundstad et al., 2014, due to practical and ethical constraints). We also note that, since we did not test 2-point diagonal correlations, we cannot draw conclusions similar to those in Hermundstad et al., 2014 about the difference between an effect due to efficient coding and one due to a hypothetical oblique effect for the specific 2-point horizontal vs. diagonal comparison. These points are now all brought up in the Discussion of our revised manuscript (lines 189-208). It is also worth noting that the oblique effect was a minor point of the Hermundstad et al. paper and the main arguments did not hinge on it.

    3. Combining 1) and 2), it is unclear why the ranking in rat sensitivity is evidence for efficient coding. In Hermundstad et al., 2014, efficient coding was established by comparing the image-based precision matrix with the human perceptual isodiscrimination contours. There is no such comparison here.
    4. If the authors would like to hypothesize that the rat sensitivity shows efficient coding simply because its ranking is similar to humans, more needs to be done to shore up the quantitative comparison between the two.

    In response to points 3 and 4: We thank the reviewer for underscoring the difference between, on the one hand, a quantitative comparison of perceptual sensitivities with the variance of the statistics in natural images and, on the other hand, a more qualitative comparison of their rank orderings. Besides our answers to points 1 and 2 above, we wish now to address specifically this important distinction. In our initial submission, we built our argument based on the rankings in order to better connect not only with Hermundstad et al., 2014, but also with earlier human psychophysics results on the same task (Victor and Conte, 2012), where there was no comparison with natural image statistics and therefore only the qualitative ranking among sensitivities was examined. We also note that Hermundstad et al. do, in fact, make ample use of the rank-ordering agreement between natural image statistics and human sensitivity in order to support their argument ("rank-order", or similar locutions, are used six times between the results and the discussion). In this sense, while it is true that "In Hermundstad et al., 2014, efficient coding was established by comparing the image-based precision matrix with the human perceptual isodiscrimination contours", it is also true that the rank ordering was presented as part of the evidence for efficient coding.

    Having said this, we nevertheless agree that our argument can be strengthened by presenting both approaches, qualitative and quantitative. We have now added a new figure (Fig. 3), where we compare our estimates of psychophysical sensitivity in rats with the corresponding values for human psychophysics and natural image statistics reported in Hermundstad et al., 2014 (note that we were only able to compare three out of the four statistics that we tested, because – as the reviewer themselves noted in a previous comment – Hermundstad et al. did not consider 1-point correlations). The comparisons in Fig. 3 (and the related quantitative measures reported in the text, lines 146-160) reveal a strong quantitative match, similar to that between the human psychophysics and the image statistics data.
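
    For concreteness, the snippet below sketches the two kinds of comparison being discussed here: rank-order agreement and a direct quantitative correlation. It is purely illustrative; the arrays are placeholders, not the measured sensitivities or image-statistic values from either study, which would need to be substituted in.

    ```python
    # Illustrative sketch only: the arrays below are placeholders, NOT the values
    # reported in this paper or in Hermundstad et al., 2014.
    import numpy as np
    from scipy.stats import spearmanr, pearsonr

    # One entry per statistic shared across studies (e.g., 2-, 3- and 4-point).
    rat_sensitivity   = np.array([1.00, 0.20, 0.55])   # placeholder values
    human_sensitivity = np.array([1.00, 0.25, 0.60])   # placeholder values
    image_variability = np.array([1.00, 0.30, 0.65])   # placeholder values

    # Qualitative comparison: do the rankings agree?
    rho, _ = spearmanr(rat_sensitivity, human_sensitivity)
    # Quantitative comparison: how well does sensitivity track image-statistic variability?
    r, _ = pearsonr(rat_sensitivity, image_variability)
    print(f"rank-order agreement: {rho:.2f}, linear correlation: {r:.2f}")
    ```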
