Goal-free sensory encoding and learning

Investigating how goals impact the way we explore, represent, and interact with the world is vital for understanding human cognition. In their insightful review, Molinaro & Collins (2023) redefine the conventional role of goals in computational theories of learning and decision-making, arguing that elements traditionally treated as ‘fixed’ in reinforcement learning frameworks (e.g., states, actions, and rewards) are in fact intricately linked to and shaped by an agent’s current goals. In support of their claim that goals are dynamic elements that actively shape information processing by altering an agent’s state, they draw on fMRI work showing that neural representations in prefrontal cortex vary systematically when participants imagine using the same object to achieve different goals (Castegnetti et al., 2021). These and other findings suggest that goals not only influence high-level cognitive processes but can also modulate the encoding of sensory information, including representations in early sensory areas (Schaffner et al., 2023). Here we add nuance to this perspective by highlighting the obligatory and largely automatic nature of early sensory processing, wherein evoked responses to complex stimuli (e.g., faces, objects) encode visual input largely independently of an agent’s goal state. This caveat arises from the time-resolved neural decoding literature, which suggests that while an observer’s task undoubtedly guides attention and goal states, its influence on the early stages of visual processing is comparatively subtle.