The representational geometry of out-of-distribution generalization in primary visual cortex and artificial neural networks

Abstract

Humans and other animals display a remarkable ability to generalize learned knowledge to novel domains, a phenomenon known as out-of-distribution (OOD) generalization. This capability is thought to depend on the format of neural population representations; however, the specific geometric properties that support OOD generalization and the learning objectives that give rise to them remain poorly understood. Here, we examine the OOD generalization of neural population representations of static grating orientations in the mouse visual cortex. We show that a decoder trained on neural responses within a restricted orientation domain can generalize to held-out orientation domains. The quality of this generalization correlates with both the dimensionality and the curvature of the underlying neural representation manifold. Notably, a similar OOD-generalizable geometry emerges in a deep neural network trained to predict the next frame in natural video sequences. These findings reveal the geometric properties of neural representations that underlie OOD generalization, and suggest that predictive learning objectives offer a promising route to acquiring generalizable representational geometry.
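
To make the cross-domain decoding protocol described in the abstract concrete, here is a minimal sketch, not the authors' actual pipeline. It uses synthetic von Mises orientation tuning curves, a linear ridge decoder from scikit-learn, and an assumed train/test split at 90°; the population size, noise level, and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic population: von Mises tuning over orientation (period 180 deg).
# All parameters below are illustrative assumptions, not values from the paper.
n_neurons, kappa = 200, 2.0
pref = rng.uniform(0, np.pi, n_neurons)  # preferred orientations (rad)

def responses(theta):
    """Noisy population responses to grating orientations theta (rad)."""
    drive = np.exp(kappa * np.cos(2 * (theta[:, None] - pref[None, :])))
    return drive + rng.normal(scale=0.5, size=(len(theta), n_neurons))

# Train domain: 0-90 deg; held-out (OOD) domain: 90-180 deg.
theta_train = rng.uniform(0, np.pi / 2, 500)
theta_test = rng.uniform(np.pi / 2, np.pi, 500)
X_train, X_test = responses(theta_train), responses(theta_test)

# Decode orientation via its doubled-angle embedding, which respects
# the 180-deg periodicity of orientation.
Y_train = np.stack([np.cos(2 * theta_train), np.sin(2 * theta_train)], axis=1)
decoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# Evaluate on the held-out orientation domain.
pred = decoder.predict(X_test)
theta_hat = 0.5 * np.arctan2(pred[:, 1], pred[:, 0]) % np.pi
err = np.abs(theta_hat - theta_test)
err = np.minimum(err, np.pi - err)  # wrap circular error
print(f"median OOD angular error: {np.degrees(np.median(err)):.1f} deg")
```

A decoder that captures the smooth circular structure of the representation on one half of the orientation domain can extrapolate to the other half; failure to do so would indicate a geometry that does not support OOD generalization in the sense tested here.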
