Modeling spatial contrast sensitivity in responses of primate retinal ganglion cells to natural movies
Abstract
Retinal ganglion cells, the output neurons of the vertebrate retina, often display nonlinear summation of visual signals over their receptive fields. This creates sensitivity to spatial contrast, letting the cells respond to spatially structured visual stimuli even when the overall illumination of the receptive field does not change. Yet, computational models of ganglion cell responses are often based on linear receptive fields, and typical nonlinear extensions, which separate receptive fields into nonlinearly combined subunits, are often cumbersome to fit to experimental data. Previous work has suggested modeling spatial contrast sensitivity in responses to flashed images by combining signals from the mean and variance of light intensity inside the receptive field. Here, we extend and adjust this spatial contrast model for application to spatiotemporal stimulation and explore its performance on spiking responses that we recorded from ganglion cells of marmosets under stimulation with artificial and naturalistic movies. We show how the model can be fitted to experimental data and that it outperforms common models with linear spatial integration to different degrees for different types of ganglion cells. Finally, we use the model framework to infer the cells’ spatial scale of nonlinear spatial integration. Our work shows that the spatial contrast model can capture aspects of nonlinear spatial integration in the primate retina with only a few free parameters. The model can be used to assess the cells’ functional properties under natural stimulation and provides a simple-to-obtain benchmark for comparison with more detailed nonlinear encoding models.
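To make the idea of combining mean and variance signals concrete, the following is a minimal sketch (not the exact formulation used in the paper) of how a spatial contrast signal could be computed for a single movie frame, assuming a normalized receptive-field weight map and a single contrast weight as the one additional parameter; all names and the specific combination rule are illustrative assumptions.

```python
import numpy as np

def spatial_contrast_generator(frame, rf_weights, contrast_weight):
    """Illustrative sketch of a spatial contrast generator signal.

    frame           : 2D array of light intensities for one movie frame
    rf_weights      : 2D array of non-negative receptive-field weights
                      (same shape as frame), assumed to sum to 1
    contrast_weight : scalar weighting the variance term; the single
                      extra parameter relative to a purely linear model
    """
    # Receptive-field-weighted mean light intensity (linear term)
    mean_signal = np.sum(rf_weights * frame)
    # Receptive-field-weighted variance of light intensity (spatial contrast term)
    var_signal = np.sum(rf_weights * (frame - mean_signal) ** 2)
    # Combine both terms; the result could then feed into a temporal filter
    # and output nonlinearity to predict the firing rate
    return mean_signal + contrast_weight * var_signal
```

Under these assumptions, fitting such a model amounts to estimating the receptive field, the temporal and output components, and the single contrast weight, which is what keeps the parameter count low compared to subunit models.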
Author Summary
Our visual experience depends on the retina’s remarkable ability to detect light patterns and contrast in the world around us. Retinal ganglion cells, the output neurons of the retina, modulate their activity based on signals within small, specific regions of the visual scene, called their receptive fields. Many cells, however, do not only encode overall brightness, summed linearly across the receptive field; they are also sensitive to local spatial contrast, that is, to variations in brightness within the receptive field. Computational models that account for this nonlinear spatial integration exist, but they require large amounts of data and are challenging to fit. We therefore developed the spatial contrast model, which takes a simple measure of light-intensity variations as an input, and tested it on measured responses of primate retinal ganglion cells to both artificial and naturalistic movies. The model substantially outperformed standard models with linear receptive fields, despite having only one additional tunable parameter. Furthermore, we used the model to investigate the spatial scale at which the cells integrate spatial contrast and found striking consistency across cell types. The spatial contrast model thus offers a practical tool for capturing retinal stimulus encoding and a simple-to-obtain benchmark for modeling nonlinear spatial integration.
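As a hypothetical illustration of how the spatial scale of nonlinear integration could be probed with such a model, one might refit it with receptive-field windows of different widths and select the width that best predicts the recorded spikes; the function names, the Gaussian-window choice, and the scoring callback below are assumptions for illustration, not the authors’ procedure.

```python
import numpy as np

def infer_spatial_scale(frames, spikes, rf_center, candidate_sigmas, fit_and_score):
    """Hypothetical sketch: scan candidate receptive-field widths (sigma, in pixels)
    and return the one whose spatial contrast model best predicts the spikes.

    frames        : array of shape (time, height, width) with the movie stimulus
    spikes        : recorded response used to score the model fit
    rf_center     : (row, col) center of the cell's receptive field
    fit_and_score : assumed callback that fits the model with the given window
                    and returns a prediction-performance score
    """
    scores = []
    rows, cols = np.indices(frames.shape[1:])
    for sigma in candidate_sigmas:
        # Gaussian receptive-field window of width sigma, normalized to sum to 1
        window = np.exp(-((rows - rf_center[0]) ** 2 + (cols - rf_center[1]) ** 2)
                        / (2 * sigma ** 2))
        window /= window.sum()
        scores.append(fit_and_score(frames, spikes, window))
    return candidate_sigmas[int(np.argmax(scores))]
```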