Aligning Statistical Models with Inference Goals in the Neuroscience of Language: A Dual-Dependency-Taxonomy
Abstract
The rise of large language models challenges neuroscience to reconsider how language is represented in the human brain and which aspects of it are uniquely human. Modeling language in the brain is complex, involving time-varying signals, structured symbolic representations, and interdependent features. Current models make trade-offs between variable independence, temporal continuity, and interpretability, shaping the inferences they support. The Dual-Dependency-Taxonomy (2D-taxonomy) addresses this by classifying models according to how they handle covariance and temporal dependencies. It distinguishes four classes (multivariable, multivariate, dynamical, and latent dynamical), each aligned with specific scientific goals. The 2D-taxonomy offers a principled framework for evaluating modeling strategies and helps clarify how different statistical choices influence what we can learn about language in the brain.
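As a minimal sketch, the two axes of the 2D-taxonomy can be read as a two-by-two grid: whether a model captures covariance among variables, and whether it captures temporal dependencies. The mapping below is an assumption for illustration only (the abstract names the four classes but does not specify which combination of dependencies defines each one), and the function name `classify_model` is hypothetical, not from the article.

```python
def classify_model(models_covariance: bool, models_temporal_dynamics: bool) -> str:
    """Map the two dependency axes to one of the four 2D-taxonomy classes.

    Assumed mapping (illustrative, not taken from the article):
    neither axis -> multivariable; covariance only -> multivariate;
    temporal only -> dynamical; both -> latent dynamical.
    """
    if models_covariance and models_temporal_dynamics:
        return "latent dynamical"
    if models_covariance:
        return "multivariate"
    if models_temporal_dynamics:
        return "dynamical"
    return "multivariable"
```

Under this reading, a model that treats each brain signal independently and ignores time would fall in the multivariable cell, while a state-space model that tracks hidden states evolving over time would fall in the latent dynamical cell.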