Aligning Statistical Models with Inference Goals in the Neuroscience of Language: A Dual-Dependency Taxonomy


Abstract

Language unfolds over time and across multiple representational levels, from acoustics to meaning. Neural systems must therefore integrate temporal with representational structure, linking the evolving input with the hierarchical units it instantiates. These operations give rise to two fundamental statistical dependencies in linguistic and neural data: covariance, the instantaneous shared structure across features or recording sites, and temporal dependence, the influence of past states on the present state. As experiments become more naturalistic, neuroimaging and electrophysiological data increasingly express both forms of structure, producing correlated variables and continuous temporal dependencies that complicate interpretation. Because statistical models handle covariance and temporal dependence differently, they support distinct kinds of inference about language-brain mapping. We introduce a Dual-Dependency Taxonomy that classifies modelling approaches by the dependencies they represent. This framework clarifies the linguistic-neural relationships each model family can reveal, the questions they cannot address, and the methodological implications that follow.
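The two dependencies contrasted in the abstract can be illustrated on synthetic data. The sketch below (not from the article; names and parameters are illustrative) builds a multichannel signal in which a shared component induces covariance across channels while AR(1) dynamics induce temporal dependence, then measures each form of structure separately.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_chan = 2000, 4

# A component common to all channels -> instantaneous covariance
shared = rng.standard_normal(n_time)

# AR(1) dynamics -> temporal dependence (past states shape the present)
x = np.zeros((n_time, n_chan))
noise = rng.standard_normal((n_time, n_chan))
for t in range(1, n_time):
    x[t] = 0.8 * x[t - 1] + shared[t] + noise[t]

# Covariance: shared structure across channels at each instant
cov = np.cov(x, rowvar=False)  # (n_chan, n_chan); off-diagonals are positive

# Temporal dependence: lag-1 autocorrelation within each channel
ac1 = np.array([np.corrcoef(x[:-1, c], x[1:, c])[0, 1]
                for c in range(n_chan)])
```

A model that captures only `cov` (e.g. a static decoder) and one that captures only `ac1` (e.g. a univariate autoregressive fit) would describe different aspects of the same data, which is the distinction the proposed taxonomy formalizes.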