Decoding the neural stages from action and object recognition to mentalizing
Abstract
Higher-level action interpretation, such as inferring underlying intentions and predicting future actions, requires the integration of conceptual action information (e.g. "opening") with semantic knowledge about persons and objects (e.g. "my friend Anna", "pizza box"). However, how the neural systems for action and object recognition and memory interact with each other to form the basis for inferring higher-level mental states remains unclear. Here we use fMRI-based crossmodal multiple regression representational similarity analysis in human female and male participants to elucidate the processing stages from basic action and object recognition to mentalizing. We show that inferring intentions from observed actions or written sentences involves a modality-general network of lateral and medial frontoparietal and temporal brain regions associated with conceptual action and object representation and mentalizing. The representational profiles in these regions are explained by models capturing different types of conceptual information, revealing distinct but partially overlapping networks for action, object, and mental state representation. There was no strict separation of networks for action, object, and mental state representations, arguing against a sequential bottom-up hierarchy from action and object understanding pathways to the mentalizing network. Rather, left-hemispheric regions, specifically ventrolateral prefrontal, inferior parietal and anterior lateral occipitotemporal cortex, showed strong representational overlap, pointing towards a core network for making meaning of action-object structures at a conceptual level. We argue that this core network represents a distributional semantic hub between classic networks for action and object understanding and the mentalizing network.
Significance Statement
How does the human brain integrate information from actions, e.g., "open a pizza box", to understand the actions' underlying intentions? To do so, the brain needs to combine information from different neural networks for action and object recognition and pass it to the mentalizing network for inferring intentions, such as "satisfying hunger". We characterize the interplay of these networks using fMRI-based crossmodal multivariate analyses and find that a left-lateralized core network in inferior frontal, inferior parietal, and lateral occipitotemporal cortex represents all critical ingredients (conceptual action and object information as well as higher-level mental state representations) simultaneously and in an overlapping manner. This suggests that this core network is essential for semantic interpretation and functions as a bridge between the recognition pathways and the mentalizing system.