Learning generalizable representations through efficient coding

Abstract

Reinforcement learning (RL), an influential framework for understanding human learning, explains human behavior as aimed at maximizing reward. However, this approach offers limited insights into human generalization. Here, we propose refining classical RL by incorporating the efficient coding principle, which emphasizes maximizing reward using the simplest necessary representations. This refined framework predicts that intelligent agents, constrained by simpler representations, will inevitably develop the abilities to 1) distill environmental stimuli into fewer, abstract internal states; and 2) detect and utilize rewarding environmental features. Consequently, complex stimuli are mapped to compact representations, forming the basis for generalization. In two experiments, we demonstrate that, whereas classical RL models focusing on maximizing reward fail in generalization, an efficient coding model that learns compact representations achieves human-level generalization performance. We argue that efficient coding, rather than reward maximization, represents a more suitable computational goal for understanding human behavior, in terms of both learning and generalization.
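The core idea of the abstract — that an agent mapping many stimuli onto fewer, reward-relevant internal states generalizes where a purely tabular reward-maximizer does not — can be illustrated with a toy sketch. This is a minimal, hypothetical example (not the authors' model): stimuli are color-shape pairs, only color predicts reward, and a "compact" agent that represents stimuli by color alone generalizes to an unseen pair while a tabular agent cannot.

```python
import random

random.seed(0)

# Toy contextual bandit: each stimulus is a (color, shape) pair,
# but only color predicts reward. An agent with a compact,
# feature-based representation (color only) generalizes to novel
# color-shape combinations; a tabular agent keyed on the full
# stimulus does not. Illustrative sketch only.

COLORS = ["red", "blue"]
SHAPES = ["circle", "square", "star"]

def reward(stimulus):
    color, _ = stimulus
    return 1.0 if color == "red" else 0.0

def train(agent_key, trials, lr=0.5):
    """Delta-rule value learning over the agent's internal states."""
    q = {}
    for stim in trials:
        k = agent_key(stim)
        q[k] = q.get(k, 0.0) + lr * (reward(stim) - q.get(k, 0.0))
    return q

# Training set omits the ("red", "star") combination entirely.
train_stimuli = [(c, s) for c in COLORS for s in SHAPES[:2]] * 20
random.shuffle(train_stimuli)

tabular = train(lambda s: s, train_stimuli)     # one state per stimulus
compact = train(lambda s: s[0], train_stimuli)  # one state per color

novel = ("red", "star")  # never seen in training
# The compact agent predicts high value for the novel stimulus...
print(compact.get(novel[0], 0.0) > 0.9)  # True
# ...while the tabular agent has no estimate for it at all.
print(novel in tabular)                  # False
```

The sketch captures only the representational point, not the efficient-coding objective itself (trading representational simplicity against reward), which the article develops formally.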
