Facial expression discrimination emerges from neural subspaces shared with detection and identity

Abstract

Understanding how the human brain decodes facial expressions remains a fundamental challenge, requiring computational models that tightly connect neural responses to behavior. Here, we demonstrate that rhesus macaques provide a unique and powerful animal model for uncovering the neural computations behind human facial expression discrimination, bridging critical gaps between behavior, neural activity, and computational theory. Despite the challenges of establishing reliable behavioral paradigms in macaques, we developed a robust discrimination task spanning six emotional categories, yielding strong, image-by-image behavioral correspondence between macaques and humans. By systematically comparing artificial neural networks (ANNs) to macaque behavior and inferotemporal (IT) cortex neural data, we found that traditional action unit–based models fail to capture image-level behavioral structure, whereas ANNs with IT-like internal representations outperform all others. Neural recordings showed that early IT population responses (70–100 ms) carried the strongest predictive power for facial expression discrimination, underscoring the primacy of feedforward codes in guiding behavior. Expression coding in IT was significantly shaped by face-selective neurons that also encoded identity. This convergence points to a shared functional subspace in IT, where stable (identity) and dynamic (expression) information coexist along overlapping dimensions. Such an architecture moves beyond the classical view of segregated pathways, revealing a general coding principle by which IT flexibly supports multiple socially relevant functions within a common representational geometry.