Mechanisms of Neural Representation and Segregation of Multiple Spatially Separated Visual Stimuli

Abstract

Segregating objects from one another and the background is essential for scene understanding, object recognition, and visually guided action. In natural scenes, it is common to encounter spatially separated stimuli, such as distinct figure-ground regions, adjacent objects, and partial occlusions. Neurons in mid- and high-level visual cortex have large receptive fields (RFs) that often encompass multiple, spatially separated stimuli. It is unclear how neurons represent and segregate multiple stimuli within their RFs, and what role spatial cues play in such representation. To investigate these questions, we recorded neuronal responses in the middle temporal (MT) cortex of monkeys to spatially separated stimuli that moved simultaneously in two directions. We found that, across motion directions, response tuning to multiple stimuli was systematically biased toward the stimulus located at the more-preferred RF subregion of the neuron. The sign and magnitude of this spatial-location bias were correlated with the spatial preference of the neuron for single stimuli presented in isolation. We demonstrated that neuronal responses to multiple stimuli can be captured by an extended normalization model, in which the responses elicited by the individual stimuli are summed with weights set by the neuron's spatial preference. We also proposed a circuit implementation for the model. Our results indicate that visual neurons leverage spatial selectivity within their RFs to represent multiple spatially separated stimuli. The spatial-location bias in neuronal responses enables individual components of multiple stimuli to be represented by a population of neurons with different spatial preferences, providing a neural substrate for segregating multiple stimuli.
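
A minimal sketch of the model form described above, assuming a simple divisive normalization; the specific symbols (w_1, w_2, sigma) and the exact divisive form are illustrative assumptions, not equations taken from the article:

\[
R_{12} \;=\; \frac{w_1 R_1 + w_2 R_2}{w_1 + w_2 + \sigma},
\]

where R_1 and R_2 are the responses to each stimulus presented alone, w_1 and w_2 reflect the neuron's spatial preference for the corresponding RF subregions, and sigma is a semi-saturation constant. When w_1 exceeds w_2, the combined response is pulled toward the tuning for the stimulus at subregion 1, consistent with the spatial-location bias reported in the abstract.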

Significance Statement

Elucidating how neurons represent multiple visual stimuli is crucial for understanding the principles and mechanisms of neural coding. We found that the neuronal response in MT to spatially separated moving stimuli can be captured by the well-known normalization model, with an important new extension: the responses elicited by the individual stimulus components are weighted by the neuron's spatial preference for single stimuli within its receptive field and then combined. Consequently, the response of a neuron to multiple stimuli can be substantially biased toward the stimulus at the neuron's preferred spatial location. Our results revealed a previously unknown coding strategy for representing and segregating multiple spatially separated stimuli. Our proposed circuit implementation provides insight into the neural mechanisms underlying spatial preference-weighted normalization.
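
A short numerical sketch of how such spatial preference-weighted normalization could produce the reported bias. This is not the authors' fitted model; the function name, weight values, and tuning curves below are hypothetical and only assume the divisive form given earlier.

import numpy as np

def weighted_normalization(r1, r2, w1, w2, sigma=1.0):
    # Hypothetical combined response to two stimuli in the RF,
    # assuming R12 = (w1*R1 + w2*R2) / (w1 + w2 + sigma).
    return (w1 * r1 + w2 * r2) / (w1 + w2 + sigma)

# Illustrative direction-tuning curves to each stimulus presented alone.
directions = np.arange(0, 360, 45)
r1 = 40 * np.exp(-0.5 * ((directions - 90) / 40) ** 2)   # stimulus in RF subregion 1
r2 = 40 * np.exp(-0.5 * ((directions - 225) / 40) ** 2)  # stimulus in RF subregion 2

# For a neuron preferring subregion 1 (w1 > w2), the combined tuning
# is biased toward the stimulus at the more-preferred location.
r_both = weighted_normalization(r1, r2, w1=2.0, w2=0.5)
print(np.round(r_both, 1))

In this toy example, the peak of r_both sits near the direction of the stimulus in subregion 1, illustrating how neurons with different spatial preferences could each emphasize a different component of the multi-stimulus display.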
