Representational Geometries of Perception and Working Memory

Abstract

This study compared the representational geometries of visual perception and visual working memory using human 7T fMRI. Observers viewed a blended face-scene sample, attended to either the face or the scene, and judged whether the attended aspect matched a subsequent test image. We found that the stimulus coding dimension distinguishing faces from scenes rotated when perceptual content entered working memory, in both visual and association areas. In these regions, exclusive combinations (i.e., perceived face with memorized scene vs. perceived scene with memorized face) were linearly decodable, indicating that perception and working memory occupy distinct subspaces. Such high-dimensional, flexible coding may help prevent confusion between perception and memory. In contrast, robust cross-decoding between perception and working memory revealed factorized representational formats, consistent with the generalizability of their contents. The balance between flexibility and abstraction varied along the cortical hierarchy: early sensory regions emphasized flexibility, whereas transmodal regions showed greater abstraction.
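
To make the two analyses summarized above concrete, the sketch below (not the authors' pipeline) generates synthetic "voxel" patterns in which working memory reuses a rotated version of the perceptual face-vs-scene axis. A linear classifier is then (1) trained on perception and tested on working memory (cross-decoding) and (2) used to decode the exclusive combinations, which is only possible when the memory axis departs from the perceptual one. All dimensions, noise levels, and the axis construction are hypothetical.

```python
# Minimal sketch (not the authors' analysis pipeline): synthetic illustration of
# (1) cross-decoding between perception and working memory and
# (2) linear decoding of the "exclusive combinations" described in the abstract.
# All dimensions, noise levels, and the axis construction are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, noise = 200, 50, 1.0

# Hypothetical face-vs-scene coding axes: working memory reuses part of the
# perceptual axis but is rotated into a partly distinct direction.
axis_percept = rng.standard_normal(n_voxels)
axis_memory = 0.5 * axis_percept + rng.standard_normal(n_voxels)

def simulate(axis, labels):
    """Trial-by-voxel patterns: +axis for face trials, -axis for scene trials, plus noise."""
    signs = np.where(labels == 1, 1.0, -1.0)[:, None]
    return signs * axis[None, :] + noise * rng.standard_normal((len(labels), n_voxels))

labels = rng.integers(0, 2, n_trials)        # 1 = face, 0 = scene
X_percept = simulate(axis_percept, labels)   # perception (sample) trials
X_memory = simulate(axis_memory, labels)     # working-memory (delay) trials

# (1) Cross-decoding: train on perception, test on working memory.
clf = LogisticRegression(max_iter=1000).fit(X_percept, labels)
print("cross-decoding accuracy:", clf.score(X_memory, labels))

# (2) Exclusive combinations: perceived face + memorized scene vs.
# perceived scene + memorized face. If both processes used the same axis, the
# two conditions would cancel and be inseparable; a rotated memory axis makes
# them linearly decodable.
X_excl = np.vstack([
    simulate(axis_percept, np.ones(n_trials, int)) + simulate(axis_memory, np.zeros(n_trials, int)),
    simulate(axis_percept, np.zeros(n_trials, int)) + simulate(axis_memory, np.ones(n_trials, int)),
])
y_excl = np.r_[np.ones(n_trials, int), np.zeros(n_trials, int)]
acc = cross_val_score(LogisticRegression(max_iter=1000), X_excl, y_excl, cv=5).mean()
print("exclusive-combination decoding accuracy:", acc)
```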

Highlights

  • Representational geometry rotates when perceptual content enters working memory.

  • Such rotation expands the representational dimensionality for flexible coding.

  • Perceptual and mnemonic representations are still generalizable across each other.

  • Flexibility–generalization tradeoff aligns with the unimodal–transmodal gradient.
