Spatiotemporal Dynamics of Invariant Face Representations in the Human Brain

Abstract

The human brain can effortlessly extract a familiar face’s age, gender, and identity despite dramatic changes in appearance, such as head orientation, lighting, or expression. Yet, the spatiotemporal dynamics underlying this ability, and how they depend on task demands, remain unclear. Here, we used multivariate decoding of magnetoencephalography (MEG) responses and source localization to characterize the emergence of invariant face representations. Human participants viewed natural images of highly familiar celebrities that systematically varied in viewpoint, gender, and age, while performing a one-back task on the identity or the image. Time-resolved decoding revealed that identity information emerged rapidly and became increasingly invariant to viewpoint over time. We observed a temporal hierarchy: view-specific identity information appeared at 64 ms, followed by mirror-invariant representations at 75 ms and fully view-invariant identity at 89 ms. Identity-invariant age and gender information emerged around the same time as view-invariant identity. Task demands modulated only late-stage identity and gender representations, suggesting that early face processing is predominantly feedforward. Source localization at peak decoding times showed consistent involvement of the occipital face area (OFA) and fusiform face area (FFA), with stronger identity and age signals than gender. Our findings reveal the spatiotemporal dynamics by which the brain extracts view-invariant identity from familiar faces, suggest that age and gender are processed in parallel, and show that task demands modulate later processing stages. Together, these results offer new constraints on computational models of face perception.
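The time-resolved decoding described above follows the standard MVPA approach of training and testing a classifier independently at each time point of the evoked response. Below is a minimal, hedged sketch of that general technique; it uses simulated data and hypothetical dimensions (trial counts, sensor counts, class labels) rather than the authors' actual MEG recordings or analysis code.

```python
"""Sketch of time-resolved multivariate decoding: fit a classifier at each
time point and track cross-validated accuracy over time. Simulated data
stand in for real MEG epochs; this is not the authors' pipeline."""
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 120       # hypothetical dimensions
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)                  # e.g., identity A vs. identity B

# Inject a weak class-dependent signal after a notional stimulus onset
X[y == 1, :10, 40:] += 0.3

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Train and test a separate classifier at every time point
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    for t in range(n_times)
])
print(f"peak decoding accuracy {accuracy.max():.2f} at time index {accuracy.argmax()}")
```

Plotting the resulting accuracy time course against stimulus onset is what yields latency estimates of the kind reported in the abstract (e.g., when view-specific versus view-invariant identity information first exceeds chance).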
