Unsupervised learning as a computational principle works in visual learning of natural scenes, but not of artificial stimuli
Abstract
The question of whether we learn visual features merely through exposure remains a subject of controversy. A prevalent computational model suggests that visual features frequently exposed to observers in natural environments are likely to be learned. However, this unsupervised learning model appears to be contradicted by a significant body of experimental results with human participants indicating that visual perceptual learning (VPL) of visible task-irrelevant features does not occur with frequent exposure. Here, we demonstrate a resolution to this controversy with a new finding: Exposure to a dominant global orientation as a task-irrelevant feature led to VPL of that orientation when the orientation was derived from natural scene images, whereas VPL did not occur with artificial images, even when their distributions of local orientations and spatial frequencies were matched to those of the natural scene images. Further investigation revealed that this disparity arises from the presence of higher-order statistics in natural scene images, namely global structures such as correlations between different local orientation and spatial frequency channels. Moreover, behavioral and neuroimaging results indicate that the dominant orientation carried by these higher-order statistics undergoes less attentional suppression than that in artificial images, which may facilitate VPL. Our results contribute to resolving the controversy by affirming the validity of unsupervised learning models for natural scenes but not for artificial stimuli. They challenge the assumption that VPL occurring in everyday life can be predicted by laws governing VPL for conventionally used artificial stimuli.
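To make the key statistic concrete, the sketch below illustrates one common way to quantify the cross-channel correlations the abstract refers to: filter an image with a bank of Gabor filters spanning several orientations and spatial frequencies, take each channel's response energy, and correlate the channels. This is a minimal illustration under assumed parameters, not the authors' analysis pipeline; the frequencies, orientations, and the `channel_correlations` helper are hypothetical choices.

```python
# Minimal sketch (not the authors' pipeline) of the kind of higher-order
# statistic described above: correlations between the outputs of local
# orientation / spatial-frequency channels. Parameters are illustrative.
import numpy as np
from skimage.filters import gabor

def channel_correlations(image, frequencies=(0.1, 0.2, 0.3),
                         orientations=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Correlation matrix across (frequency, orientation) channels.

    Each channel is the magnitude of a Gabor-filter response, a common
    proxy for a V1-like local orientation/spatial-frequency channel.
    """
    responses = []
    for f in frequencies:
        for theta in orientations:
            real, imag = gabor(image, frequency=f, theta=theta)
            responses.append(np.hypot(real, imag).ravel())  # channel energy
    # Rows of the stacked matrix are channels; np.corrcoef correlates rows.
    return np.corrcoef(np.vstack(responses))

# Usage: corr = channel_correlations(img_gray), where img_gray is a 2-D
# grayscale array in floating point.
```

On this view, natural scenes would exhibit structured off-diagonal correlations, whereas artificial images matched only in their marginal distributions of local orientations and spatial frequencies would lack them.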