Modeling conditional distributions of neural and behavioral data with masked variational autoencoders

Abstract

Extracting the relationship between high-dimensional recordings of neural activity and complex behavior is a ubiquitous problem in systems neuroscience. Toward this goal, encoding and decoding models attempt to infer the conditional distribution of neural activity given behavior and vice versa, while dimensionality reduction techniques aim to extract interpretable low-dimensional representations. Variational autoencoders (VAEs) are flexible deep-learning models commonly used to infer low-dimensional embeddings of neural or behavioral data. However, it is challenging for VAEs to accurately model arbitrary conditional distributions, such as those encountered in neural encoding and decoding, and harder still to model both simultaneously. Here, we present a VAE-based approach for accurately calculating such conditional distributions. We validate our approach on a task with known ground truth and demonstrate its applicability to high-dimensional behavioral time series by retrieving the conditional distributions over masked body parts of walking flies. Finally, we probabilistically decode motor trajectories from neural population activity in a monkey reach task and query the same VAE for the encoding distribution of neural activity given behavior. Our approach provides a unifying perspective on joint dimensionality reduction and learning conditional distributions of neural and behavioral data, which will allow for scaling common analyses in neuroscience to today's high-dimensional multi-modal datasets.
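To make the masking idea concrete: below is a minimal, illustrative sketch (not the authors' implementation, whose architecture and training details are not given in this abstract) of how a VAE objective can be conditioned by masking. A subset of input dimensions is hidden from the encoder, while reconstruction is still scored on all dimensions, so the decoder is driven to model the masked-out variables given the observed ones. The linear "networks", dimensions, and unit-variance posterior are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): d_x observed features (e.g. neural +
# behavioral variables), d_z latent dimensions.
d_x, d_z = 6, 2

# Random linear maps stand in for trained encoder/decoder networks.
W_enc = rng.normal(size=(d_z, d_x))
W_dec = rng.normal(size=(d_x, d_z))

def masked_elbo(x, mask):
    """One-sample ELBO estimate: only observed entries (mask == 1) reach the
    encoder, but reconstruction is scored on all entries, so the decoder must
    predict the masked-out block, i.e. model it conditionally."""
    x_obs = x * mask                       # hide the unobserved block
    mu = W_enc @ x_obs                     # mean of q(z | x_obs)
    log_var = np.zeros(d_z)                # unit-variance posterior for simplicity
    eps = rng.normal(size=d_z)
    z = mu + np.exp(0.5 * log_var) * eps   # reparameterization trick
    x_hat = W_dec @ z                      # decoder mean of p(x | z)
    recon = -0.5 * np.sum((x - x_hat) ** 2)                      # Gaussian log-lik. (up to const.)
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)   # KL(q || N(0, I))
    return recon - kl

x = rng.normal(size=d_x)
mask = np.array([1, 1, 1, 0, 0, 0])  # condition on the first half, infer the rest
print(masked_elbo(x, mask))
```

Training such an objective with randomly resampled masks would let a single model answer both encoding queries (behavior observed, neural activity masked) and decoding queries (neural activity observed, behavior masked), which is the unifying perspective the abstract describes.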
