Dissociating low-level visual features from high-level event structure in action segmentation

Abstract

Event segmentation is a fundamental component of human perception and cognition. The growing field of event cognition studies how people decide where events occur in incoming sensory data, how "event boundaries" alter decision-making and memory processes, how events reveal themselves in neural activity, and how events may be represented within perception itself. That last point is critical: the representation of events in the first place is filtered through perception. But there is a key open question in the field: Is the perceptual representation of events a simple reflection of the fact that event boundaries are accompanied by large changes in low-level visual inputs (e.g., a sudden cut in a movie scene)? Or do our higher-level internal models of events (e.g., "step one" versus "step two" of a tennis serve) shape how events are perceived? Here, across seven preregistered experiments, we attempt to dissociate the roles of lower-level visual features and higher-level semantic structures in the perception of event boundaries. First, participants produced boundary labels by segmenting brief physical actions (e.g., kicking a ball). Then, separate groups of observers were asked to visually detect subtle disruptions in the actions at boundary versus non-boundary timepoints. The results consistently showed an interfering effect of event boundaries on the detection of disruptions. Critically, boundary effects were strongest when stimuli were presented in recognizable forms versus distorted forms that preserved only lower-level features. Thus, automatic and rapid perceptual segmentation of observed actions may be influenced by both sensory cues and our internal models of the world.