A robust receptive field code for optic flow detection and decomposition during self-motion
This Zenodo record is a permanently preserved version of a PREreview. You can view the complete PREreview at https://prereview.org/reviews/5797244.
bioRxiv preprint doi: https://doi.org/10.1101/2021.10.06.463330;
Zhang et al. applied a previously developed method (Zhang and Arrenberg 2019) to identify and characterize the receptive fields of many motion-sensitive neurons in zebrafish. They found that some of these neurons behave as "matched filters" (each detecting a single motion) and robustly encode translation-induced optic flow. The anatomical arrangement of these neurons in the brain seems to correlate with their motion sensitivity. The authors also conducted behavioral experiments showing that the fish is capable of decomposing the translational and rotational components of a mixed optic flow.
Using visual information to infer self-motion is a crucial task for an animal's survival. The methods and findings in the manuscript would be of general interest to the motion-vision community. The manuscript is clearly written, with detailed method descriptions. Some comments and questions are listed below:
1. The contiguous motion noise (CMN) stimulus and the accompanying statistical methods developed by the authors are quite interesting and seem to work well for identifying neurons with small receptive fields (on the order of the correlation length of the stimulus). However, matched filters that encode self-motion tend to have large receptive fields (looking at the background motion). Are we missing most of the "rotation neurons" (there seem to be 60 rotation-selective complex RFs, but not much analysis of them)? Should we expect other translation neurons with much larger RFs? Have the authors tried CMN with larger correlation lengths?
2. Of the 400 complex RFs, are they all different, given that they come from 7 fish? Can we estimate how many, say, translation-sensitive motion neurons there are in one fish (perhaps also from Figure 4A)?
3. The fact that unimodal and bimodal neurons are mostly located in different parts of the brain suggests to me that the bimodal neurons could be downstream (integrators) of the unimodal ones. In other words, could the unimodal neurons encode all the visual information seen in the complex ones? If this were the case, it should have been made clear before making comparisons between the complex and simple RFs (e.g., line 222). To be clear, I do think it is instructive to apply the same analysis to both sets of neurons.
4. I am not convinced by the "topographic arrangement" shown in Figure 4B. Do the authors mean "retinotopy" or something much weaker? There is clearly a left-right separation in 4B, but that is not retinotopy, at least not clearly from those neurons.
5. The distribution of translation directions in Figure 4A looks interesting. Could the authors comment on the ethological aspect? Do we expect this distribution given how the fish swims and navigates?
6. The behavior experiments are clever, but they feel a bit open-ended and could use some more analysis. In Figure 6E, the eye motion seems to be monotonic during the 20 s stimulus. Is this by design? What happens if the stimulus is longer? For the T+R mixture case, the motion range of the right eye is much smaller and has the opposite sign (compared to the first three cases). Does this mean the right eye is trying to move leftwards but is limited by the binocular region? If so, does it mean the fish is trying to follow the dominant (faster) stimulus on its left side? Naively, I would expect the fish to be able to disentangle translation from rotation to a certain degree; there will always be ambiguous cases, so it would be useful to know the fish's limits in decomposing mixed motions. Finally, in the abstract, the authors mention the "predicted decomposition in OKR and OMR". This sounds as if it were possible to predict how mixed motion is decomposed, which is not the case based on my understanding. The very last sentence of the abstract is also too strong a statement, I believe, since we do not yet know the algorithm or the circuit.
Some minor points
1. Figure 4C: the y-axis is confusing. Are 180 and -180 not the same?
2. Figure 5D: the y-axis should have no unit.
3. Figure 6C: for the swim trajectories, I think showing one example would be much clearer. Overlaying all the trajectories just makes it hard to see the red one (and the blue ones are indistinguishable).