Do Memories of Inferred Visual Representations Guide Low-level Perception?

Abstract

Prior knowledge shapes how we interpret and adapt to a dynamic environment. A crucial aspect of this process is the lifelong development of structured object representations, enabling meaningful and survival-relevant interactions with the external world. In this study, we investigated the extent to which memories of complex objects influence perception by refining visual details. To test this, we had participants hold Mooney images in working memory; these stimuli require top-down processing to perceive the hidden image structure. While holding a Mooney image in memory, participants performed a detection task in which they had to report an edge-feature target that appeared at selected locations along the illusory contours of the image. Participants completed this task twice: once before they were shown the hidden Mooney structure and once after learning it. Within a signal detection framework, we assessed whether learning the hidden content altered participants’ sensitivity and response bias in detecting the edge-feature targets. We found that holding object memories enhanced Mooney image disambiguation but did not refine visual sensitivity. This dissociation, in which categorical identification improves without a corresponding perceptual refinement, suggests that memories of complex objects improve the overall understanding of ambiguous information independently of refining relevant visual details. These findings have important implications for the theory of recurrent processing, which has traditionally emphasised perceptual refinement in top-down feedback. Our results highlight how prior knowledge improves the perceived clarity of degraded visual information without necessarily improving precision in local feature detection.
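
For readers unfamiliar with the framework, sensitivity (d′) and response bias (criterion c) in the standard equal-variance signal detection model are computed from the hit rate H and false-alarm rate FA. The abstract does not state which estimator or corrections the authors applied, so the formulas below are the conventional definitions rather than the paper's specific analysis:

d' = \Phi^{-1}(H) - \Phi^{-1}(\mathit{FA})
c = -\tfrac{1}{2}\left[\Phi^{-1}(H) + \Phi^{-1}(\mathit{FA})\right]

where \Phi^{-1} is the inverse of the standard normal cumulative distribution function. A higher d' reflects finer discrimination of the edge-feature target, whereas a shift in c reflects a change in the overall tendency to report the target, independent of discrimination ability.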
