Convolutional neural network models of the primate retina reveal adaptation to natural stimulus statistics

Abstract

The diverse nature of visual environments demands that the retina, the first stage of the visual system, encode a vast range of stimuli with varying statistics. The retina adapts its computations to specific features of the input, such as brightness, contrast, or motion. However, it is less clear whether it also adapts to the statistics of natural scenes relative to white noise, which is often used to infer models of retinal computation. To address this question, we analyzed the activity of retinal ganglion cells (RGCs) in response to both white noise and naturalistic movie stimuli. We performed a systematic comparative analysis of traditional linear-nonlinear (LN) and recent convolutional neural network (CNN) models and tested their generalization across stimulus domains. We found that no model type trained on one stimulus ensemble was able to accurately predict neural activity on the other, suggesting that retinal processing depends on the stimulus statistics. Under white-noise stimulation, the receptive fields of the neurons were mostly low-pass, while under natural image statistics they exhibited a more pronounced surround resembling the whitening filters predicted by efficient coding. Together, these results suggest that retinal processing dynamically adapts to the stimulus statistics.
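The comparison described above rests on fitting two model classes, an LN model and a CNN, to RGC responses and then evaluating each on the stimulus ensemble it was not trained on. The sketch below illustrates that workflow. It is a minimal, hypothetical implementation: the kernel sizes, channel counts, softplus nonlinearities, Poisson loss, and the random placeholder data are all assumptions for illustration, not the authors' actual architecture or training procedure.

```python
# Minimal sketch of LN vs. CNN models of RGC responses and a cross-domain
# generalization test. All architectural choices are illustrative assumptions.
import torch
import torch.nn as nn

N_LAGS, H, W, N_CELLS = 20, 36, 36, 8  # temporal lags, stimulus size, number of RGCs


class LNModel(nn.Module):
    """One spatiotemporal linear filter per cell, followed by a static nonlinearity."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(N_LAGS * H * W, N_CELLS)

    def forward(self, x):                      # x: (batch, N_LAGS, H, W)
        return nn.functional.softplus(self.linear(x.flatten(1)))


class CNNModel(nn.Module):
    """Two shared convolutional layers (time lags as input channels), per-cell readout."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_LAGS, 8, kernel_size=15), nn.Softplus(),
            nn.Conv2d(8, 8, kernel_size=9), nn.Softplus(),
        )
        feat_size = 8 * (H - 14 - 8) * (W - 14 - 8)
        self.readout = nn.Linear(feat_size, N_CELLS)

    def forward(self, x):
        return nn.functional.softplus(self.readout(self.features(x).flatten(1)))


def fit(model, stimulus, spikes, epochs=200, lr=1e-3):
    """Fit predicted firing rates to binned spike counts with a Poisson loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        rate = model(stimulus)
        loss = (rate - spikes * torch.log(rate + 1e-6)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


# Cross-domain test: train on one stimulus ensemble, evaluate on the other.
# Random tensors stand in for real recordings here.
white_noise = torch.randn(512, N_LAGS, H, W)
natural_clips = torch.randn(512, N_LAGS, H, W)        # placeholder for movie patches
spikes_wn = torch.poisson(torch.ones(512, N_CELLS))
spikes_nat = torch.poisson(torch.ones(512, N_CELLS))

ln_wn = fit(LNModel(), white_noise, spikes_wn)
with torch.no_grad():
    pred_on_natural = ln_wn(natural_clips)             # generalization check
```

In this toy setup, the receptive fields of the fitted models could be inspected (e.g., the rows of the LN model's weight matrix, reshaped to N_LAGS x H x W) to compare spatial filter shapes obtained under the two stimulus ensembles.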
