Enhanced and idiosyncratic neural representations of personally typical scenes

Abstract

Previous research shows that the typicality of visual scenes (i.e., whether they are good examples of a category) determines how easily they can be perceived and represented in the brain. However, the unique visual diets individuals are exposed to across their lifetimes should sculpt highly personal notions of typicality. Here, we therefore investigated whether scenes that are more typical to individual observers are more accurately perceived and represented in the brain. We used drawings to enable participants to describe typical scenes (e.g., a kitchen) and converted these drawings into 3D renders. These renders were used as stimuli in a scene categorization task, during which we recorded EEG. In line with previous findings, categorization was most accurate for renders resembling the typical scene drawings of individual participants. Our EEG analyses revealed two critical insights into how these individual differences emerge at the neural level: First, personally typical scenes yielded enhanced neural representations from around 200 ms after stimulus onset. Second, personally typical scenes were represented in idiosyncratic ways, with reduced dependence on high-level visual features. We interpret these findings within a predictive processing framework, in which individual differences in internal models of scene categories, formed through experience, shape visual analysis in idiosyncratic ways.