Human visual grouping based on within- and cross-area temporal correlations

Abstract

Perceptual organization in the human visual system involves neural mechanisms that spatially group and segment image regions based on local feature similarities, such as the temporal correlation of luminance changes. Successful segmentation models in computer vision, including graph-based algorithms and vision transformers, leverage similarity computations across all elements in an image, suggesting that effective similarity-based grouping should rely on a global computational process. However, whether human vision employs a similarly global computation remains unclear, owing to the absence of appropriate methods for manipulating similarity matrices across multiple elements within a stimulus. To investigate how "temporal similarity structures" influence human visual segmentation, we developed a stimulus-generation algorithm based on the Vision Transformer. This algorithm independently controls within-area and cross-area similarities by adjusting the temporal correlation of luminance, color, and spatial-phase attributes. To assess human segmentation performance with the generated texture stimuli, participants completed a temporal two-alternative forced-choice task, identifying which of two intervals contained a segmentable texture. The results showed that segmentation performance is significantly influenced by the configuration of both within- and cross-area correlations among the elements, regardless of attribute type. Furthermore, human performance closely aligned with the predictions of a graph-based computational model, suggesting that human texture segmentation can be approximated by a global computational process that optimally integrates pairwise similarities across multiple elements.
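The kind of global, graph-based grouping the abstract refers to can be illustrated with a minimal spectral-partitioning sketch. This is not the authors' actual model or stimulus algorithm; it is a toy example (element count, noise level, and the Laplacian-based bipartition are all our assumptions) showing how pairwise temporal correlations across all elements can be integrated at once to segment two areas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stimulus: 8 elements, each a luminance time series.
# Elements 0-3 share one temporal driver, elements 4-7 another, so
# within-area correlation is high and cross-area correlation is low.
T = 200
driver_a = rng.standard_normal(T)
driver_b = rng.standard_normal(T)
signals = np.vstack(
    [driver_a + 0.3 * rng.standard_normal(T) for _ in range(4)]
    + [driver_b + 0.3 * rng.standard_normal(T) for _ in range(4)]
)

# Pairwise similarity matrix: temporal correlation between elements.
W = np.corrcoef(signals)
W = np.clip(W, 0.0, None)   # keep non-negative affinities
np.fill_diagonal(W, 0.0)

# Global grouping via the graph Laplacian: the sign of the Fiedler
# vector (eigenvector of the 2nd-smallest eigenvalue) bipartitions the
# elements using all pairwise similarities simultaneously.
D = np.diag(W.sum(axis=1))
L = D - W
eigvals, eigvecs = np.linalg.eigh(L)
labels = (eigvecs[:, 1] > 0).astype(int)
print(labels)  # two groups matching the construction (labels may be flipped)
```

Because the partition is computed from the full similarity matrix rather than from local comparisons, raising cross-area correlation or lowering within-area correlation degrades the split, which mirrors the stimulus manipulation the abstract describes.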