Item-based parsing of dynamic scenes in a combined attentional tracking and working memory task

Abstract

Human visual processing is limited: we can track only a few moving objects at a time and store only a few items in visual working memory (WM). A shared mechanism that may underlie these performance limits is how the visual system parses a scene into representational units. In the present study, we explored whether multiple-object tracking (MOT) and WM rely on a common item-based indexing mechanism. We measured the contralateral delay activity (CDA), an event-related slow wave that tracks load in an item-based manner, while subjects completed a combined WM and MOT task, concurrently tracking items and remembering visual information. In Experiment 1, participants tracked one or two moving discs without needing to remember the discs’ colors (track-and-ignore condition), or while also remembering the discs’ colors (two or four colors in total; track-and-remember condition). In Experiment 2, participants attended either two static discs or two moving discs while remembering the discs’ colors (two or four colors). In both experiments, the CDA was largely determined by the tracking task: CDA amplitudes reflected the number of tracked discs rather than the number of to-be-remembered colors. However, when the discs were static, CDA amplitudes did reflect color load. We discuss these findings in relation to longstanding theories of visual cognition (FINSTs and object files) and their implications for cognitive models of visual representation: how a scene is parsed into item-based representations is a key mechanism in the operation of WM.
