Semantic Meaning Shapes Feature Binding in Visual Long-Term Memory: A Graded Account of Object Memory

Abstract

Studies of visual long-term memory (VLTM) increasingly focus on how object features are stored and interrelated, with object-based accounts proposing integrated representations and feature-based views emphasizing independent feature storage and retrieval. More recent hybrid models suggest graded levels of interdependency, yet the role of semantic meaning in shaping these dependencies has remained underexplored. The present study advances a graded perspective by examining how semantic content, manipulated at the individual feature level, modulates relations among visual details. Specifically, we investigated whether color, when meaningful to an object’s identity, enhances memory for arbitrary, surface-level properties such as location or size. A preliminary survey established which object-color pairings were perceived as semantically meaningful versus meaningless. Participants showed superior memory for meaningful colors (Experiment 1a), regardless of verbal suppression (Experiment 1b), validating our manipulation. We then used conditional dependency analyses to assess whether memory for location (Experiment 2a) or size (Experiment 2b) was contingent on memory for color. Across both immediate and delayed tests, we found significant feature interdependency in both color conditions, but critically, dependency was greater when color was meaningful (although only in the delayed test for size). These findings support continuous accounts of VLTM, showing that memory representations are neither wholly integrated nor fully fragmented. Instead, semantic information strengthens both memory for individual features and their binding to otherwise arbitrary details. By manipulating semantic value at the feature level, our results demonstrate how conceptual meaning shapes the coherence of VLTM representations, providing new evidence for graded models of object memory.
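The conditional dependency analysis mentioned above can be illustrated with a short sketch. Assuming per-trial binary accuracy scores for color and for a second feature (location or size), one simple formulation compares accuracy on the second feature when the color was remembered versus when it was forgotten. The simulated data and function below are hypothetical illustrations only; the study's actual analysis may use a different formulation (for example, contrasting observed joint accuracy against an independence model).

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_trials = 200

# Hypothetical per-trial outcomes: True = feature retrieved correctly.
color_correct = rng.integers(0, 2, size=n_trials).astype(bool)
location_correct = rng.integers(0, 2, size=n_trials).astype(bool)

def conditional_dependency(cue_feature, target_feature):
    """Accuracy on the target feature when the cue feature was
    remembered, minus accuracy when it was forgotten.
    Zero under strict feature independence; positive values
    indicate interdependent (bound) feature memories."""
    cue = np.asarray(cue_feature, dtype=bool)
    target = np.asarray(target_feature, dtype=float)
    return target[cue].mean() - target[~cue].mean()

dep = conditional_dependency(color_correct, location_correct)
print(f"P(location ok | color ok) - P(location ok | color wrong) = {dep:+.3f}")
```

Under this formulation, strict feature independence predicts a dependency of zero, so the pattern reported in the abstract, greater dependency for meaningful colors, corresponds to a larger gap between the two conditional accuracies in the meaningful-color condition.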