Encoding and combining naturalistic motion cues in the ferret higher visual cortex
Abstract
Natural visual scenes contain rich flows of pattern motion that vary not only in orientation but also in spatial scale and temporal rhythm. To interpret this motion properly, the brain must extract individual features and integrate them into coherent parts, and sometimes also segregate these parts from each other. Here, we performed single-neuron recordings in the motion-sensitive higher-order visual cortex (PMLS) of awake ferrets to investigate how complex motion signals are encoded and combined. We presented motion clouds, naturalistic stimuli with parametrically controlled spatiotemporal frequency content, and found that motion features were encoded in a temporally ordered sequence: orientation and spatial frequency emerged within 120 ms after stimulus onset, while temporal frequency and direction followed at later latencies. Time-resolved decoding revealed that this selectivity evolved dynamically within neurons and was distributed across the population. To probe motion integration, we introduced compound motion clouds composed of two or three localized frequency components. Neuronal responses were well explained by a linear pooling model, suggesting a simple summation of the individual components. However, a distinct subset of neurons exhibited late responses sensitive to changes in speed content despite matched marginals, consistent with receptive fields that differentiate along the speed gradient. Together, these results uncover a structured and distributed code for motion in high-level visual cortex and provide mechanistic insight into how the brain parses complex motion in natural scenes.
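The stimulus-generation code is not given in the abstract; the following is a minimal NumPy sketch of the general motion-cloud idea, i.e. random-phase noise whose amplitude spectrum is a Gaussian envelope centered on a chosen spatial frequency, drift direction, and temporal frequency. All function and parameter names (`motion_cloud`, `sf0`, `bsf`, `theta`, `btheta`, `tf0`, `btf`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def motion_cloud(n=64, n_t=32, sf0=0.1, bsf=0.03, theta=0.0, btheta=0.2,
                 tf0=0.1, btf=0.03, seed=0):
    """Hypothetical motion-cloud sketch: random-phase noise whose amplitude
    spectrum is a Gaussian envelope in (fx, fy, ft); not the paper's code."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None, None]    # cycles/pixel along x
    fy = np.fft.fftfreq(n)[None, :, None]    # cycles/pixel along y
    ft = np.fft.fftfreq(n_t)[None, None, :]  # cycles/frame
    sf = np.hypot(fx, fy)                                       # radial spatial frequency
    dth = np.angle(np.exp(1j * (np.arctan2(fy, fx) - theta)))   # angle offset from theta
    env = (np.exp(-0.5 * ((sf - sf0) / bsf) ** 2)     # spatial-frequency band
           * np.exp(-0.5 * (dth / btheta) ** 2)       # direction band
           * np.exp(-0.5 * ((ft - tf0) / btf) ** 2))  # temporal-frequency band
    phase = np.exp(2j * np.pi * rng.random((n, n, n_t)))
    movie = np.fft.ifftn(env * phase).real   # .real adds the Hermitian mirror
    return movie / np.abs(movie).max()       # normalize to unit contrast

clip = motion_cloud()  # (64, 64, 32) array: x, y, time
```

Under this sketch, the compound clouds described above would correspond to summing two or three such envelopes before the inverse FFT.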
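The time-resolved decoding analysis implies training feature decoders on population activity at successive time bins. Below is a hedged sketch using scikit-learn, assuming trial-by-neuron-by-bin spike counts; the function name `decoding_latency` and the chance-plus-margin onset criterion are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decoding_latency(counts, labels, chance=0.25, margin=0.05):
    """counts: (n_trials, n_neurons, n_bins) spike counts;
    labels: (n_trials,) stimulus feature class (e.g. orientation bin).
    Returns per-bin accuracy and the first bin exceeding chance + margin."""
    acc = np.array([
        cross_val_score(LogisticRegression(max_iter=1000),
                        counts[:, :, t], labels, cv=5).mean()
        for t in range(counts.shape[2])
    ])
    above = np.flatnonzero(acc > chance + margin)
    return acc, (int(above[0]) if above.size else None)
```

Running this separately with orientation, spatial frequency, temporal frequency, and direction labels would yield one latency per feature, the kind of comparison behind the temporal ordering reported above.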
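The linear pooling model can be illustrated with ordinary least squares: predict each neuron's response to a compound cloud as a weighted sum of its responses to the individual components. A minimal sketch; the weights-plus-offset form is an assumption, not necessarily the authors' exact model.

```python
import numpy as np

def fit_linear_pooling(r_components, r_compound):
    """r_components: (n_conditions, n_components) responses to components
    presented alone; r_compound: (n_conditions,) responses to the compounds.
    Fits r_compound ~ r_components @ w + b and returns (weights, R^2)."""
    X = np.column_stack([r_components, np.ones(len(r_compound))])  # add offset
    w, *_ = np.linalg.lstsq(X, r_compound, rcond=None)
    pred = X @ w
    r2 = 1 - np.sum((r_compound - pred) ** 2) / np.sum(
        (r_compound - r_compound.mean()) ** 2)
    return w, r2
```

In this framing, a high R^2 supports simple summation, while the speed-sensitive subset described above would appear as neurons whose late responses leave systematic residuals.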