deepGOLSA: Goal-directed planning with subgoal reduction models human brain activity

Abstract

Goal-directed planning presents a challenge for classical Reinforcement Learning (RL) algorithms due to the vastness of combinatorial state and goal spaces. Humans and animals adapt to complex environments, especially those with diverse, non-stationary objectives, often by employing intermediate goals for long-horizon tasks. Here we propose a novel method for effectively deriving subgoals from arbitrary and distant original goals, called the deep Goal-Oriented Learning and Selection of Action (deepGOLSA) model. Using a loop-removal technique, the method distills high-quality subgoals from a replay buffer without the need for prior environmental knowledge. This generalizable and scalable solution applies across different domains. Simulations show that the model can be integrated into existing RL frameworks such as Deep Q Networks and Soft Actor-Critic models. DeepGOLSA accelerates performance in both discrete and continuous tasks, such as grid-world navigation and robotic arm manipulation, relative to existing RL models. Moreover, the subgoal reduction mechanism, even without iterative training, outperforms its integrated deep RL counterparts on a navigation task.
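The abstract's loop-removal idea can be illustrated with a minimal sketch: given a stored trajectory of states from a replay buffer, any time a state is revisited, the intervening cycle is dropped, leaving a shorter path whose intermediate states can serve as subgoal candidates. The function below is a hedged illustration under the assumption that states are hashable; it is not the authors' implementation.

```python
def remove_loops(trajectory):
    """Drop cycles from a state trajectory: whenever a state reappears,
    truncate back to its first visit. Illustrative sketch only, not the
    deepGOLSA implementation described in the article."""
    path = []
    seen = {}  # state -> index of its first occurrence in `path`
    for s in trajectory:
        if s in seen:
            # A loop closed at `s`: discard everything after its first visit.
            path = path[:seen[s] + 1]
            seen = {state: i for i, state in enumerate(path)}
        else:
            seen[s] = len(path)
            path.append(s)
    return path

# Example: a grid-world episode that wanders in a cycle A -> B -> C -> A
episode = ["A", "B", "C", "A", "D", "goal"]
print(remove_loops(episode))  # -> ['A', 'D', 'goal']
```

After loop removal, intermediate states of the shortened path (here `'D'`) are natural subgoal candidates for a distant goal, which is consistent with the article's description of distilling subgoals from replayed experience.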

The goal reduction mechanism also models human problem-solving. Comparing the model’s performance and activation with human behavior and fMRI data in a treasure hunting task, we found matching representational patterns between specific deepGOLSA model components and corresponding human brain areas, particularly the vmPFC and basal ganglia. The results suggest a new computational framework for examining goal-directed behaviors in humans.
