Unveiling the latent dynamics in social cognition with multi-agent inverse reinforcement learning

Abstract

Understanding the intentions and beliefs of others, a capacity known as “theory of mind”, is a crucial element of social behavior. These beliefs and perceptions are inherently subjective and latent, and therefore difficult to observe directly. Social interaction further complicates matters, as multiple agents can reason recursively about each other’s strategies at increasing levels of cognitive hierarchy. While previous research has shown promise in inferring a single agent’s latent values through inverse reinforcement learning, extending this approach to interactions among multiple agents remains an open challenge because of its computational complexity. In this work, we combined probabilistic recursive modeling of cognitive levels with joint value decomposition to achieve efficient multi-agent inverse reinforcement learning (MAIRL). We validated our method on simulations of a cooperative foraging task, where our algorithm recovered both the ground-truth goal-directed value function and agents’ beliefs about their counterparts’ strategies. Applied to human behavior in a cooperative hallway task, our method identified meaningful goal maps that evolved with task proficiency, along with an interaction map related to key states in the task, without access to the task rules. Similarly, in a non-cooperative task performed by monkeys, we identified mutual predictions that correlated with the animals’ social hierarchy, highlighting the behavioral relevance of the latent beliefs we uncovered. Together, these findings demonstrate that MAIRL offers a new framework for uncovering human and animal beliefs in social behavior, illuminating previously opaque aspects of social cognition.
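The two ideas named in the abstract — recursive modeling of cognitive levels and decomposing a joint value into an individual goal term plus an interaction term — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors’ implementation: the 1-D state space, the uniform belief over the other agent’s location, and all reward values are assumptions for illustration only.

```python
import numpy as np

N_STATES = 5            # positions in a toy 1-D "hallway" (assumed)
ACTIONS = [-1, 0, +1]   # move left, stay, move right

def softmax(x, beta=2.0):
    """Boltzmann weighting of action values (beta = choice sharpness)."""
    z = np.exp(beta * (x - x.max()))
    return z / z.sum()

def policy(q):
    """Soft-greedy policy: one action distribution per state."""
    return np.array([softmax(q[s]) for s in range(N_STATES)])

def action_values(goal_reward, interaction, other_policy):
    """Q(s, a) = own goal value + expected interaction with the other agent.

    interaction[s, s_other] scores co-occupancy; the agent averages it under
    its *belief* about the other agent's next state (here, a uniform belief
    over the other agent's current position, for brevity).
    """
    q = np.zeros((N_STATES, len(ACTIONS)))
    for s in range(N_STATES):
        for ai, a in enumerate(ACTIONS):
            s_next = min(max(s + a, 0), N_STATES - 1)
            expected_inter = 0.0
            for so in range(N_STATES):          # other agent's current state
                for aj, ao in enumerate(ACTIONS):
                    so_next = min(max(so + ao, 0), N_STATES - 1)
                    expected_inter += (other_policy[so, aj]
                                       * interaction[s_next, so_next]
                                       / N_STATES)
            q[s, ai] = goal_reward[s_next] + expected_inter
    return q

# Individual goal terms: A prefers the right end, B the left end (assumed).
goal_a = np.linspace(0.0, 1.0, N_STATES)
goal_b = np.linspace(1.0, 0.0, N_STATES)
# Interaction term: both agents are penalized for sharing a state (assumed).
interaction = -np.eye(N_STATES)

# Level-0: B is modeled as pursuing its goal alone, ignoring A.
uniform = np.full((N_STATES, len(ACTIONS)), 1.0 / len(ACTIONS))
pol_b0 = policy(action_values(goal_b, np.zeros((N_STATES, N_STATES)), uniform))

# Level-1: A best-responds to its level-0 model of B — one step of the
# recursive cognitive hierarchy.
pol_a1 = policy(action_values(goal_a, interaction, pol_b0))
```

Inverting this forward model — searching for the goal and interaction terms that make observed trajectories most likely — is the inference problem that MAIRL addresses; the sketch above only shows the generative side.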