Mice and primates use distinct strategies for visual segmentation

Curation statements for this article:
  • Curated by eLife


    eLife assessment

Primates perceive and detect stimuli differently depending on the stimulus context in which they are embedded, and have a remarkable ability to understand the way in which objects and parts of objects are embedded in natural scenes (scene segmentation). An example of this is figure-ground segmentation, a well-documented phenomenon resulting in a "pop-out" of a figure region and corresponding enhanced neural firing rates in visual areas. It is unknown whether mice show behavioral and neural signatures similar to those of primates. The present study suggests that mice show different segmentation behavior than primates, lacking texture-invariant segmentation of figures and the corresponding neural correlates. This reveals a fundamental difference between primates and mice that is important both for researchers working on these species and for researchers studying scene segmentation. The findings are further interpreted in terms of neural network architectures (feedforward networks) and are relevant for this field too.


Abstract

The rodent visual system has attracted great interest in recent years due to its experimental tractability, but the fundamental mechanisms used by the mouse to represent the visual world remain unclear. In the primate, researchers have argued from both behavioral and neural evidence that a key step in visual representation is ‘figure-ground segmentation’, the delineation of figures as distinct from backgrounds. To determine if mice also show behavioral and neural signatures of figure-ground segmentation, we trained mice on a figure-ground segmentation task where figures were defined by gratings and naturalistic textures moving counterphase to the background. Unlike primates, mice were severely limited in their ability to segment figure from ground using the opponent motion cue, with segmentation behavior strongly dependent on the specific carrier pattern. Remarkably, when mice were forced to localize naturalistic patterns defined by opponent motion, they adopted a strategy of brute force memorization of texture patterns. In contrast, primates, including humans, macaques, and mouse lemurs, could readily segment figures independent of carrier pattern using the opponent motion cue. Consistent with mouse behavior, neural responses to the same stimuli recorded in mouse visual areas V1, RL, and LM also did not support texture-invariant segmentation of figures using opponent motion. Modeling revealed that the texture dependence of both the mouse’s behavior and neural responses could be explained by a feedforward neural network lacking explicit segmentation capabilities. These findings reveal a fundamental limitation in the ability of mice to segment visual objects compared to primates.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

The authors present a study of figure-ground segregation in different species. Figure-ground segregation is an important mechanism for the establishment of an accurate 3D model of the environment. The authors examine whether figure-ground segregation occurs in mice in a similar manner to that reported in primates and compare results to two other species (tree shrews and mouse lemurs). They use both behavioral measures and electrophysiology/two-photon imaging to show that mice and tree shrews do not use opponent motion signals to segregate the visual scene into objects and background, whereas mouse lemurs and macaque monkeys do. This information is of great importance for understanding to what extent the rodent visual system is a good model for primate vision, and the use of multiple species is highly revealing for understanding the development of figure-ground segregation through evolution.

The behavioral data is of high quality. I would add one caveat: it seems unfair to report that the tree shrews could not generalize the opponent motion stimulus, as it seems they struggled to learn it in the first place. Their performance was below 60% on the training data, and they weren't trained for many sessions in comparison to the mice. Perhaps with more training the tree shrews might have attained higher performance on the textures, and this would allow a more sensitive test of generalization. The authors should qualify their statements about the tree shrews to reflect this issue.

The reviewer is correct in this assertion. For context, we performed the mouse experiments first and were hoping to see texture-invariant performance, but instead realized that the mice were resorting to memorizing patterns. With this in mind, when expanding to tree shrews we wanted to prevent this type of learning, to really test whether texture-invariant recognition was possible. We therefore increased the number of orientations tested to 5, resulting in 10 possible textures that would have to be memorized, in contrast to the 4 that had to be memorized by the mice. We now clarify this in the text:

    “We reversed the number of train/test patterns compared to what was used for the mice (Fig. 2i1) because we reasoned that animals might be more likely to generalize if given more patterns for training. We had performed the mouse experiments initially, noticed the memorization approach, and were trying to avoid this behavior in treeshrews. This also means that the naturalistic train condition presented to treeshrews was harder than that for mice (5 orientations for treeshrews vs. 2 orientations for mice in the training set).”

    Reviewer #2 (Public Review):

Luongo et al. investigated the behavioural ability of 4 different species (macaque, mouse lemur, tree shrew and mouse) to segment figures defined by opponent motion, as well as by different visual features, from the background. With carefully designed experiments they convincingly make the point that figures that are not defined by textural elements (orientation or phase offsets, thus visible in a still frame) but purely by motion contrast could not be detected by non-primate species. Interestingly, it appears to be particularly motion contrast, since pure motion - figures moving on a static background - could be discriminated better, at least by mice. This is highly interesting and surprising -- especially for a tree shrew, a diurnal, arboreal mammal, very closely related to primates and with a highly evolved visual system. It is also an important difference to take into account considering the multitude of studies on the mouse visual system in recent years.

    The authors additionally present neuronal activity in mice, from three different visual cortical areas recorded with both electrophysiology and imaging. Their conclusions are mostly supported by the data, but some aspects of the recordings and data analysis need to be clarified and extended.

    The main issues are outlined below roughly in order of importance:

    1. The most worrying aspect is that, if I interpret their figures correctly, their recordings seem not very stable and this may account for many of the differences across the visual conditions. The authors do not report in which order the different stimuli were shown, their supplemental movie, however, makes it seem as though they were not recorded fully interleaved, but potentially in a block design with all cross1 positions recorded first, before switching to cross2 positions and then on to iso... If I interpret Figure 6a correctly, each line is the same neuron and the gray scale shows the average response rate for each condition. Many of these neurons, however, show a large change in activity between the cross1 and the cross2 block. Much larger than the variability within each block that should be due to figure location and orientation tuning. If this interpretation is correct, this would mean that either there were significant brain state changes (they do have the mice on a ball but don't report whether and how much the animals were moving) between the blocks or their recordings could be unstable in time. It would be good to know whether similar dramatic changes in overall activity level occur between the blocks also in their imaging data.

The same might be true for differences in the maps between conditions in figure 4. If indeed the recordings were in blocks and some cells stopped responding, this could explain the low map similarities. For example Cell 1 for the cross stimuli seems to be a simple ON cell, almost like their idealized cell in 3d. However, even though the exact texture in the RF and large parts of the surround for a large part of the locations is exactly identical for Cross1 and Iso2, as well as Cross2 and Iso1, the cell's responses for both iso conditions appear to be only noise, or at least extremely noise dominated. Why would the cell not respond in a phase or luminance dependent manner here?

    This could either be due to very high surround suppression in the iso condition (which cannot be judged within condition normalization) or because the cell simply responded much weaker due to recording instability or brain state changes. Without any evidence of significant visual responses, enough spikes in each condition and a stable recording across all blocks, this data is not really interpretable. Instability or generally lower firing rates could easily also explain differences in their decoding accuracy.

    Similarly, it is very hard to judge the quality of their imaging data. They show no example field of views or calcium response traces and never directly compare this data to their electrophysiology data. It is mentioned that the imaging data is noisy and qualitatively similar, but some quantification could help convince the reader. Even if noisy, it is puzzling that the decoding accuracy should be so much worse with the imaging data: Even with ten times more included neurons, accuracy still does not even reach 30% of that of the ephys data. This could point to very poor data quality.

    We address the issue of stability of selectivity in our response to all reviewers above. Note that we wavered on whether to include the imaging data at all given the much better decoding accuracies from the electrophysiology data, and decided to include it for two main reasons:

1. It qualitatively gives a very similar result, namely that there is a texture-dependent ability to resolve the position of a given figure, suggesting that the rodent visual system is indeed better equipped to represent figure locations for the cross and iso stimuli than for the nat stimulus.

2. The correspondence between single cells' spatial preference maps on subsequent days suggests that these neurons represent a stable and consistent preference.

The following text has been added to the Methods section:

    Matching cells across days. Cells were tracked across days by first re-targeting to the same plane by eye such that the mean fluorescence image on a given day was matched to that on the previous day, with online visual feedback provided by a custom software plugin for Scanbox. […] This result points to the consistency of the spatial responses in the visual cortex as a substrate for inferring figure position.
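One way to score this kind of plane re-targeting is a normalized correlation between the mean fluorescence images from the two days. The sketch below is only an illustration of that idea on synthetic images; it is not the authors' custom Scanbox plugin, and all names and sizes are invented.

```python
import numpy as np

def alignment_score(mean_img_day1, mean_img_day2):
    """Normalized cross-correlation between two mean fluorescence images;
    values near 1 suggest the same imaging plane was re-acquired."""
    a = mean_img_day1 - mean_img_day1.mean()
    b = mean_img_day2 - mean_img_day2.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

rng = np.random.default_rng(0)
plane = rng.normal(size=(64, 64))                  # hypothetical mean image
same = plane + 0.1 * rng.normal(size=plane.shape)  # same plane, new noise
other = rng.normal(size=(64, 64))                  # a different plane
print(alignment_score(plane, same) > alignment_score(plane, other))
```

In practice such a score would be maximized over small translations before comparing days; the point here is only that re-acquiring the same plane yields a much higher image correlation than an unrelated plane.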

2. There is no information on the recorded units given. Were they spike sorted? Did they try to distinguish fast spiking and regular spiking units? What layers were they recorded from? It is well known that there are large laminar differences in the strength of figure ground modulation, as well as orientation tuned surround suppression. If most of their data were from layer 5, perhaps a lack of clear figure modulation might not be that surprising. This could perhaps also be seen when comparing their electrophysiology data to the imaging data, which is reportedly from layer 2/3, where most neurons show larger figure modulation/tuned surround suppression effects. There is, however, no report or discussion of differences in modulation between recording modalities.

    We used Kilosort (Pachitariu et al., 2016) for spike sorting of the data. The output of the automatic template-matching algorithm from Kilosort was visualized on Phy and then curated manually.

We did not compute current source density. The 64 contacts on our probe spanned 1 mm, so we recorded cells throughout all layers of cortex. We did not focus on a specific layer, as we did not find strong modulation by figure/ground or border ownership in any of our cells. We did not distinguish fast-spiking from regular-spiking units.

3. There is an apparent discrepancy between Figure 5d and i. How can their modulation index be around -0.1 for cross (Figure 5d) - which would correspond to on average ~20% weaker responses to a figure than to background, when their PSTH (5i) shows an almost 50% increase of figure over ground. This positive figure modulation has also been widely reported in the literature (Schnabel, Kirchberger, Keller). Are there different populations of cells going into these analyses?

There was a mismatch in the cell populations used for plotting the F/G modulation index and the time course, because we had previously applied different inclusion criteria to the two analyses. We have now applied the same criteria to both and replotted Figure 5d, e, g, h.
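For concreteness, the arithmetic behind the reviewer's point can be sketched as follows (a minimal illustration with made-up firing rates; the function name is ours, not the paper's analysis code). An index of -0.1 does imply a figure response roughly 18% weaker than ground, which is inconsistent with a PSTH showing figure enhancement unless the cell populations differ:

```python
def fg_modulation_index(fig_rate, ground_rate):
    """Figure-ground modulation index, (F - G) / (F + G).
    Positive values mean the response is stronger when the figure
    covers the receptive field than when the background covers it."""
    return (fig_rate - ground_rate) / (fig_rate + ground_rate)

# (F - G)/(F + G) = -0.1  =>  F/G = 0.9/1.1, i.e. ~18% weaker to figure
print(round(fg_modulation_index(0.9, 1.1), 3))  # -0.1
```

With matched inclusion criteria, the population mean of this index and the population PSTH should tell a consistent story.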

4. In a similar vein, it is not immediately clear why the average map correlation would be bigger for random cell pairs (~0.2, Fig 3g) than for the different conditions of the same cell (~0, Fig 5b). Could this be due to differences in recording modality (imaging in 3g and ephys in 5b)?

We suspect the reviewer is correct: the difference in recording modality accounts for this discrepancy. The spatial mixing of signals inherent to calcium imaging is problematic for the study of figure-ground and border-ownership signals. The non-zero mean observed in Fig. 3g is thus likely due to neuropil contamination, whereas Fig. 5 is purely ephys data and has no such confound.
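The neuropil argument can be illustrated with a toy simulation (ours, not from the paper): adding a small shared component to otherwise independent spatial maps inflates the mean pairwise correlation above zero, on the order of the ~0.2 seen for random imaging pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def map_correlation(map_a, map_b):
    """Pearson correlation between two flattened spatial response maps
    (e.g. the mean response at each of 128 figure positions)."""
    return float(np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1])

n_cells, n_pos = 200, 128
shared = rng.normal(size=n_pos)                   # common neuropil-like signal
maps = rng.normal(size=(n_cells, n_pos)) + 0.5 * shared
r = [map_correlation(maps[i], maps[i + 1]) for i in range(0, n_cells, 2)]
print(f"mean pairwise correlation: {np.mean(r):.2f}")  # positive, not ~0
```

With a shared component of variance 0.25 on top of unit-variance private maps, the expected pairwise correlation is 0.25/1.25 = 0.2, even though the cells' private maps are fully independent.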

5. The maps in Figure 4 should show the location of the RF, because they cannot be interpreted without knowledge of the RF center and size. For example cell 4 in the iso 1 condition could be a border cell, or could respond to the center of the figure. It is impossible to deduce without knowledge of the location of the RF.

    We have added the following clarification to the figure legend for Fig. 4a:

    “Overlaid on these example stimuli are grids representing the 128 possible figure positions and a green ellipse representing the ON receptive field. Note that this receptive field is the Gaussian fit from the sparse noise experiment.”

    We have also added the following clarification to the figure legend for Fig. 4b:

    “Please note that for all of these experiments the population receptive field was centered on the grid of positions.”

6. It could help the reader to discuss the interpretation of the map correlations in Fig 5 a and b in more detail. My guess is that negatively correlated maps (within the cross or iso condition) could come from highly orientation tuned neurons, whereas higher correlation values point to more generally figure/contextually modulated cells (within this condition). While the distribution is far from bimodal, this does not rule out a population of nicely figure-modulated cells at the high end of the distribution. It might not be necessary at the level of V1 that the figure modulation be consistent across all textures. It would not be surprising if orientation contrast-defined, phase contrast-defined and motion contrast-defined figures could be signalled to higher areas by discrete populations of V1 or even LM cells.

We agree that the reviewer's interpretation of the neural findings is possible. From the behavior, however, it seems unlikely that a representation of motion contrast-defined figures is generated anywhere in the rodent brain.

7. Some of the behavioural results warrant a little more explanation or discussion, as well. In Figure 2h, the mice seem significantly better on the static version of the iso task than on the moving one. If statistically significant, this should be discussed. Is this because the static frame was maximally phase offset? Then the figure would indeed be better visible (bigger phase contrast in more frames) than in the moving condition.

    Yes, indeed, in Figure 2h, the static frame was chosen with maximal positional displacement, and thus the figure can likely be seen better. We have added this clarification to the figure legend for Fig. 2h.

Figure 2 and extended Figure 1c: why is the mouse lemur performing so poorly on average? It also appears to have the biggest problems with the cross stimulus early on in training.

The behavior experiments in the mouse lemur were carried out under an international collaboration and with substantially fewer exploratory experiments than for the mouse, tree shrew, and macaque. For the mouse lemur, we simply used a training regimen that we knew had worked efficiently for tree shrews, without any optimization of the procedure. We would therefore caution against over-interpreting the exact learning rates of the mouse lemurs and instead focus on the qualitative result that they could generalize in the Nat condition. This was a marked departure from the rodents and shrews and is the main finding we would like to convey. We suspect that with future optimization of behavior shaping, both training times and performance could be improved.

    Tree shrews seem not to be able to memorize the textures as well as the mice do. Is this because of less deprivation/motivation? Or because of the bigger set of textures in training? This would make memorization harder and could thus lower their overall performance. The comparative aspects are very interesting but the absolute differences in performance could be discussed in more detail or explained better.

Reviewer 1 raised a similar concern; please see our response above.

8. In Figure 7b, why wouldn't the explanation for the linear decodability in cross also hold for iso? There are phase offsets at the borders that simple cells should readily be able to resolve, just as in the case of orientation discontinuities. Could they make a surround phase model, similar to their surround orientation model, that could more readily capture the iso discontinuities?

The reviewer is likely correct that one could further hand-tune the model to account for the observed diversity in responses (namely, Cross > Iso > Nat for figure position decoding). We went directly to a DNN to model the data, since we thought this would be more powerful, given that the DNN features were not tuned to explain our neural data per se.
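As a point of reference for what "linear decodability of figure position" means in these analyses, here is a minimal synthetic sketch (all sizes and data are invented; this is a stand-in for, not a reproduction of, the paper's decoder): a population whose mean response depends on figure position supports an above-chance linear readout.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pos, n_neurons, n_trials = 16, 50, 20        # hypothetical sizes

# Simulate a population whose mean response depends on figure position
tuning = rng.normal(size=(n_pos, n_neurons))   # per-position mean responses
X = np.repeat(tuning, n_trials, axis=0)
X = X + rng.normal(scale=0.5, size=X.shape)    # trial-to-trial noise
y = np.repeat(np.arange(n_pos), n_trials)

# One-vs-rest linear readout fit by least squares on one-hot targets
Y = np.eye(n_pos)[y]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
accuracy = float(np.mean(np.argmax(X @ W, axis=1) == y))
print(f"decoding accuracy: {accuracy:.2f}")    # well above chance (1/16)
```

If simple cells resolved phase offsets at iso borders as effectively as orientation discontinuities at cross borders, an analogous readout would perform comparably for both; the empirical Cross > Iso > Nat ordering is what the feedforward model needs to reproduce.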
