Modeling spatial contrast sensitivity in responses of primate retinal ganglion cells to natural movies

Abstract

Retinal ganglion cells, the output neurons of the vertebrate retina, often display nonlinear summation of visual signals over their receptive fields. This creates sensitivity to spatial contrast, allowing the cells to respond to spatially structured visual stimuli, such as a contrast-reversing grating, even when there is no net change in the overall illumination of the receptive field. Yet, computational models of ganglion cell responses are often based on linear receptive fields. Nonlinear extensions, on the other hand, such as subunit models that divide the receptive field into smaller, nonlinearly combined subfields, are often cumbersome to fit to experimental data, in particular when natural stimuli are considered. Previous work in the salamander retina has shown that sensitivity to spatial contrast in responses to flashed images can be partly captured by a model that combines signals from the mean and variance of luminance inside the receptive field. Here, we extend this spatial contrast model to spatiotemporal stimulation and explore its performance on spiking responses that we recorded from marmoset retinas under artificial and natural movies. We show how the model can be fitted to experimental data and that it outperforms common models with linear spatial integration, in particular for parasol ganglion cells. Finally, we use the model framework to infer the cells’ spatial scale of nonlinear spatial integration and contrast sensitivity. Our work shows that the spatial contrast model provides a simple approach to capturing aspects of nonlinear spatial integration with only a few free parameters, which can be used to assess the cells’ functional properties under natural stimulation and which provides an easily obtained benchmark for comparison with more detailed nonlinear encoding models.
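The abstract does not specify the model's exact functional form, so the following is only a minimal sketch of the core idea it describes: combining the mean luminance and a measure of luminance variability (here, the weighted standard deviation) inside the receptive field before an output nonlinearity. The function and parameter names (spatial_contrast_response, w_mean, w_sd, threshold) and the rectifying nonlinearity are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spatial_contrast_response(stimulus, rf_weights, w_mean, w_sd, threshold=0.0):
    """Illustrative spatial contrast model (hypothetical form, not the paper's).

    stimulus   : 1-D array of luminance values for pixels inside the receptive field
    rf_weights : 1-D array of receptive-field weights for the same pixels
    w_mean     : weight on the (weighted) mean-luminance signal
    w_sd       : weight on the luminance standard-deviation (spatial contrast) signal
    """
    # Weighted mean luminance over the receptive field (linear component)
    mean_signal = np.average(stimulus, weights=rf_weights)
    # Spread of luminance around that mean -> sensitivity to spatial structure
    sd_signal = np.sqrt(np.average((stimulus - mean_signal) ** 2, weights=rf_weights))
    # Combine both signals and pass through a simple rectifying output nonlinearity
    drive = w_mean * mean_signal + w_sd * sd_signal
    return max(drive - threshold, 0.0)


# Example: a fine grating raises the contrast term even though the mean
# luminance over the receptive field is the same as for a uniform stimulus.
rf = np.ones(8) / 8.0
uniform = np.full(8, 0.5)               # spatially uniform stimulus
grating = np.array([0.0, 1.0] * 4)      # grating with the same mean luminance
print(spatial_contrast_response(uniform, rf, w_mean=1.0, w_sd=2.0))
print(spatial_contrast_response(grating, rf, w_mean=1.0, w_sd=2.0))
```

In this toy example the uniform stimulus and the grating produce the same mean signal, but only the grating drives the variance-based term, which is how such a model can respond to spatial structure without a net change in illumination.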
