Rapid Computation of High-Level Visual Surprise
Abstract
Predictive processing theories propose that the brain continuously generates expectations about incoming sensory information. Discrepancies between these predictions and actual inputs (sensory prediction errors) guide perceptual inference. A fundamental yet largely unresolved question is which stimulus features the brain predicts, and therefore what kind of surprise drives neural responses. Here, we investigated this question using EEG and computational modelling based on deep neural networks (DNNs). Participants viewed object images whose identity was probabilistically predicted by preceding cues. We then quantified trial-by-trial surprise at both low-level (early DNN layers) and high-level (late DNN layers) visual feature representations. Results showed that stimulus-evoked responses around 200 ms post-stimulus onset over parieto-occipital electrodes were increased by high-level, but not by low-level, visual surprise. These findings demonstrate that high-level visual predictions are rapidly integrated into perceptual inference, suggesting that the brain’s predictive machinery is finely tuned to exploit expectations abstracted away from low-level sensory details to facilitate perception.
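To make the abstract's notion of layer-wise surprise concrete, one common way to operationalise it is as the distance between the DNN features of the shown image and the cue-weighted expectation over the features of the possible images. The following sketch illustrates that idea; the function name, the Euclidean metric, and the toy data are illustrative assumptions, not the authors' exact quantification.

```python
import numpy as np

def layer_surprise(shown_features, candidate_features, cue_probs):
    """Illustrative surprise measure at one DNN layer (not the paper's exact method).

    shown_features:     (d,)   activations for the presented image
    candidate_features: (k, d) activations for each possible image identity
    cue_probs:          (k,)   cue-conditional probabilities over identities
    """
    cue_probs = np.asarray(cue_probs, dtype=float)
    cue_probs = cue_probs / cue_probs.sum()                 # normalise to sum to 1
    # Expected feature vector under the cue: probability-weighted average
    expected = cue_probs @ np.asarray(candidate_features)   # shape (d,)
    # Surprise = distance between actual and expected features
    return float(np.linalg.norm(np.asarray(shown_features) - expected))

# Toy example: 3 candidate identities with orthogonal 3-D features,
# and a cue that strongly predicts identity 0.
feats = np.eye(3)
probs = [0.8, 0.1, 0.1]
s_expected = layer_surprise(feats[0], feats, probs)    # cued image shown
s_surprising = layer_surprise(feats[1], feats, probs)  # unexpected image shown
```

Applied separately to early-layer and late-layer activations, such a measure yields the trial-by-trial low-level and high-level surprise regressors the abstract describes; an unexpected image (low cue probability) produces a larger value than the cued one.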