Emergent brain-like representations in a goal-directed neural network model of visual search

Abstract

Visual search, the act of locating a target among distractors, is a fundamental cognitive behavior and a core paradigm for studying visual attention. While its behavioral properties are well characterized in humans and non-human primates, the underlying neural mechanisms remain largely unspecified. To address this gap, we developed a biologically aligned neural network model trained to perform visual search directly from pixels in natural scenes. This model exhibits strong generalization to novel scenes and objects, produces human-like scanpaths, and replicates previously known behavioral biases in humans. By analyzing the model's internal representations, we found that it naturally develops a retinocentric cue-similarity map and prospective fixation signals, features that closely resemble neural activity in the primate fronto-parietal network. Beyond reproducing known behavior and neural signatures, the model makes testable predictions about the geometry and dynamics of the internal representations underpinning cue-driven prioritization, fixation preferences, their retrospective memories, and prospective plans. These findings offer a computational framework for understanding visual search and a roadmap for future neurophysiological and behavioral studies.
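To make the notion of a retinocentric cue-similarity map concrete, the sketch below shows one simple way such a map could be computed and read out as a fixation choice: a cue embedding is compared against spatial scene features, and the peak of the resulting map gives the next fixation. This is a hypothetical illustration under assumed names (extract_features, cue_similarity_map, next_fixation) with a stubbed random-projection encoder, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, dim=64):
    """Stand-in for a learned encoder: maps an HxWx3 image to an HxWxdim feature map."""
    h, w, c = image.shape
    proj = rng.standard_normal((c, dim)) / np.sqrt(c)  # fixed random projection, for illustration only
    feats = image.reshape(h * w, c) @ proj
    return feats.reshape(h, w, dim)

def cue_similarity_map(scene_feats, cue_vec):
    """Cosine similarity between the cue embedding and the features at every spatial location."""
    norms = np.linalg.norm(scene_feats, axis=-1) * np.linalg.norm(cue_vec) + 1e-8
    return (scene_feats @ cue_vec) / norms

def next_fixation(sim_map):
    """Select the (retinocentric) location with the highest cue similarity."""
    return np.unravel_index(np.argmax(sim_map), sim_map.shape)

# Toy usage: a random "scene", with the cue taken from the target's own location.
scene = rng.random((32, 32, 3))
feats = extract_features(scene)
cue = feats[20, 11]                      # pretend this is the target's embedding
fix_y, fix_x = next_fixation(cue_similarity_map(feats, cue))
print(f"next fixation: ({fix_y}, {fix_x})")  # lands at or near (20, 11)
```

In the paper's setting such a map would be learned end to end and combined with memory of past fixations and a prospective plan, rather than computed from a fixed projection as in this toy example.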
