Value Representations Shape Learning Under Changing Goals
Abstract
In March 2016, at the Four Seasons Hotel in Seoul, AlphaGo defeated world champion Lee Sedol 4-1, marking a milestone in machine intelligence. But unlike humans, AlphaGo pursued a single fixed goal. Humans continually revise their goals and preferences, raising the question of how learned value representations support such flexibility. Previous work on goal-dependent learning has focused on goal selection or maintenance, largely neglecting how the structure of value representations affects learning when goals change. Here, using two multi-goal learning paradigms combined with computational modelling, we compare two learning architectures: reweighting existing values versus relearning them. We show that participants using feature-based representations adjusted more quickly after goal switches and monitored their decisions more effectively, because they could reuse learned feature values. In contrast, participants relying on a single composite value were forced to relearn after each switch. These findings show that the structure of value representations shapes learning efficiency and behavioural flexibility in multi-goal environments.
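The contrast between the two architectures can be illustrated with a minimal toy sketch (not the authors' actual model; the task, learning rates, and option features below are hypothetical). A feature-based learner caches one value per option per reward feature and combines them with the current goal's weights, so a goal switch only changes the weighting. A composite learner caches one goal-specific scalar per option, which goes stale the moment the goal changes:

```python
import numpy as np

# Hypothetical two-option task: each option yields two reward features.
FEATURES = np.array([[1.0, 0.0],   # option 0: high on feature A
                     [0.0, 1.0]])  # option 1: high on feature B

class FeatureLearner:
    """Learns one value per (option, feature); a goal is a weight vector."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.V = np.zeros((2, 2))          # per-feature value cache
    def values(self, goal):
        return self.V @ goal               # reweight cached feature values
    def update(self, choice, feature_outcomes):
        self.V[choice] += self.alpha * (feature_outcomes - self.V[choice])

class CompositeLearner:
    """Learns one scalar value per option under the current goal only."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.Q = np.zeros(2)               # goal-blind scalar cache
    def values(self, goal):
        return self.Q                      # cannot use the new goal
    def update(self, choice, feature_outcomes, goal):
        r = feature_outcomes @ goal        # composite reward under this goal
        self.Q[choice] += self.alpha * (r - self.Q[choice])

feat, comp = FeatureLearner(), CompositeLearner()
goal_a = np.array([1.0, 0.0])              # goal A: only feature A matters
for _ in range(50):                        # train both agents under goal A
    for choice in (0, 1):
        out = FEATURES[choice]
        feat.update(choice, out)
        comp.update(choice, out, goal_a)

goal_b = np.array([0.0, 1.0])              # goal switch: feature B matters
# The feature-based learner re-ranks options instantly by reweighting;
# the composite learner still prefers the stale option and must relearn.
print(np.argmax(feat.values(goal_b)))      # → 1
print(np.argmax(comp.values(goal_b)))      # → 0 (stale preference)
```

In this sketch the switch cost falls entirely on the composite learner, mirroring the behavioural pattern the abstract describes: reweighting reuses what was learned, relearning starts over.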