Frontal eye field neurons predict “anti-Bayesian” but not Bayesian judgments of visual stability across saccades

Abstract

Bayesian models, in which priors are used to optimally compensate for sensory uncertainty, have had wide-ranging success in explaining behavior across sensorimotor contexts. We recently reported, however, that humans and monkeys use a combination of Bayesian and non-Bayesian strategies when making categorical judgments of visual stability across saccades. While they used priors to compensate for internal, movement-driven sensory uncertainty, consistent with Bayesian predictions, they decreased their use of priors when faced with external, visual image uncertainty, an “anti-Bayesian” adjustment consistent with the use of a simple classifier. Here, we tested for neural correlates of these Bayesian and classifier-based strategies in the frontal eye field (FEF), a prefrontal region shown to be important for the perception of visual stability across saccades. We recorded from single FEF neurons while two rhesus macaques performed the internal (motor) and external (image) noise tasks in each session, interleaved trial by trial. FEF activity correlated with and predicted the anti-Bayesian, but not the Bayesian, behavior. These results suggest that the two computational strategies for visual stability are implemented by distinct neural circuits and provide a first step toward an integrated understanding of the computational and neural mechanisms underlying visual perception across saccades.
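To make the contrast concrete, the sketch below illustrates the standard Bayesian prediction with Gaussian likelihoods and priors: as sensory noise grows, the weight placed on the prior should grow too. This is a generic textbook illustration, not the authors' model; the function name, parameter values, and variance terms are assumptions for illustration only.

```python
import numpy as np

def prior_weight(sigma_sensory: float, sigma_prior: float) -> float:
    """Weight on the prior mean in the Gaussian posterior estimate.

    0 means the prior is ignored; 1 means the sensory evidence is ignored.
    (Illustrative only; not the model fitted in the article.)
    """
    return sigma_sensory**2 / (sigma_sensory**2 + sigma_prior**2)

sigma_prior = 1.0  # assumed prior spread, arbitrary units
for sigma_sensory in (0.5, 1.0, 2.0):
    w = prior_weight(sigma_sensory, sigma_prior)
    print(f"sensory noise = {sigma_sensory}: prior weight = {w:.2f}")

# Prints 0.20, 0.50, 0.80: reliance on the prior increases with sensory
# noise, which is the Bayesian prediction. The "anti-Bayesian" behavior
# described in the abstract is the opposite trend under external image
# noise: reliance on the prior decreases as that noise increases.
```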
