Temporal Evolution of Neural Codes: The Added Value of a Geometric Approach to Linear Coefficients
Abstract
Multivariate decoding analyses have become a cornerstone method in cognitive neuroscience. When applied to time-resolved brain imaging signals, they provide insights into the temporal dynamics of information processing in the brain. In particular, the temporal generalization (TG) method—where a decoder trained at one time point is tested on others—is commonly used to assess the stability of neural representations over time. However, TG performance can be ambiguous: distinct representational dynamics—such as sparse versus distributed activity, or scaling of activity versus recruitment of new units—can yield similar TG matrices. Moreover, even when generalization is strong, underlying neural representations may still be evolving in ways that TG alone fails to reveal. This ambiguity of performance profiles can mask meaningful changes in the geometry of neural representations. In this study, we use controlled simulations to demonstrate how different dynamic processes can produce indistinguishable TG profiles. To resolve these ambiguities, we propose a complementary approach based on the geometry of the learned linear coefficients. Specifically, we quantify the Rotation Angle θ between decision subspaces (with cosine similarity) and the Feature Density α (capturing whether feature contributions are distributed or sparse). Together, these measures complement TG analyses, revealing how neural representations evolve in space and time. Beyond time-resolved decoding, our approach applies broadly to any linear model, offering a geometric perspective on representational dynamics.
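The Rotation Angle measure described above can be sketched as follows: given the coefficient vectors of linear decoders trained at two time points, compute their cosine similarity and convert it to an angle. A minimal illustration, assuming plain NumPy vectors for the coefficients; the `feature_density` function shown here uses a normalized participation ratio as one possible way to quantify sparse versus distributed contributions, and is not necessarily the exact α defined in the paper:

```python
import numpy as np

def rotation_angle(w1, w2):
    # Cosine similarity between two decoder coefficient vectors,
    # converted to an angle in degrees. 0 deg = stable decision
    # subspace; 90 deg = orthogonal (fully rotated) subspace.
    cos = np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def feature_density(w):
    # Illustrative density measure (normalized participation ratio):
    # ~1.0 when all features contribute evenly (distributed code),
    # ~1/n when a single feature dominates (sparse code).
    p = np.abs(w) / np.abs(w).sum()
    return 1.0 / (len(p) * np.sum(p ** 2))

# Hypothetical coefficient vectors from decoders at two time points.
w_t1 = np.array([1.0, 0.0, 0.0])
w_t2 = np.array([0.0, 1.0, 0.0])
print(rotation_angle(w_t1, w_t1))  # identical vectors -> 0.0
print(rotation_angle(w_t1, w_t2))  # orthogonal vectors -> 90.0
```

In practice the coefficient vectors would come from decoders (e.g. regularized logistic regression) fit independently at each time point of the epoched signal, yielding a time-by-time matrix of angles analogous to a TG matrix.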