Non-invasive mapping of the temporal processing hierarchy in the human visual cortex
Abstract
Vision is not instantaneous but evolves over time. However, simultaneously capturing the fine spatial detail and the rapid temporal dynamics of visual processing remains a major challenge, leaving a gap in our understanding of spatiotemporal dynamics. Here, we introduce a forward modeling technique that bridges high-spatial-resolution fMRI with high-temporal-resolution MEG, enabling us to non-invasively measure different levels of the visual hierarchy in humans and their involvement in visual processing with millisecond precision. Using fMRI, we identified levels of the visual hierarchy by measuring individuals’ population receptive fields and delineating visual field maps. We then predicted how much the activity pattern in each visual field map would contribute to brain responses measured with MEG. By comparing these predicted responses with the measured MEG responses, we assessed how much a given visual field map contributed to the measured MEG response and, most importantly, when. Combining information from all MEG sensors revealed a cortical processing hierarchy across visual field maps. We validated the method using cross-validation and demonstrated that the model generalized across MEG sensor types and stimulus shapes, and was robust to the number of visual field maps included in the model. We found that the primary visual cortex captured most of the variance in the MEG sensors and did so earlier in time than extrastriate regions. We also report a processing hierarchy across extrastriate visual field maps and clusters. Our approach effectively combines the advantages of two very different neuroimaging techniques, opening avenues for answering research questions that require recordings with high spatiotemporal detail. By bridging traditionally separate areas of research, it helps close longstanding gaps in our understanding of brain function.
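The core of such a forward modeling approach can be illustrated with a minimal sketch: given one predicted MEG sensor topography per visual field map (in the real method, obtained by projecting pRF-predicted cortical responses through an MEG gain matrix), the measured sensor pattern at each millisecond is regressed on these topographies, and the resulting time-resolved weights show how much each map contributes, and when. All array shapes, latencies, and noise levels below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_times, n_maps = 50, 200, 3

# Hypothetical predicted sensor topographies: one spatial pattern per
# visual field map (stand-in for pRF responses projected through a
# forward/gain matrix).
topographies = rng.standard_normal((n_sensors, n_maps))

# Synthetic "measured" MEG data: each map contributes with its own time
# course (earlier maps peak first), plus sensor noise. Latencies are
# arbitrary sample indices chosen for illustration.
latencies = [60, 90, 120]
t = np.arange(n_times)
timecourses = np.stack(
    [np.exp(-0.5 * ((t - lat) / 15.0) ** 2) for lat in latencies]
)
meg = topographies @ timecourses + 0.1 * rng.standard_normal((n_sensors, n_times))

# Per-timepoint least squares: regress the measured sensor pattern on the
# predicted topographies. betas[m, t] quantifies how strongly map m
# contributes to the MEG signal at time t.
betas, *_ = np.linalg.lstsq(topographies, meg, rcond=None)

# Ordering maps by the latency of their peak contribution recovers the
# simulated processing hierarchy.
peak_order = np.argsort(betas.argmax(axis=1))
print("map order by peak latency:", peak_order)
```

Solving all timepoints in one `lstsq` call works because the sensor-by-map design matrix is shared across time; only the right-hand side changes. In practice, one would cross-validate these fits across stimuli and sensor types, as described above.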
Author Summary
Vision doesn’t happen instantaneously, but unfolds over time. While we understand a lot about how the brain processes visual space, understanding how the brain processes information over both space and time is much more challenging. Vision happens incredibly fast, and space and time are tightly linked, but current technology is limited in its ability to capture these spatiotemporal dynamics.
Here, we developed a method that combines two non-invasive human brain imaging techniques using computational models: fMRI, which provides high spatial detail, and MEG, which captures millisecond-level timing. We use fMRI to model the detailed layout of the visual system, predict how this spatial layout responds to specific visual stimuli, and compare these predictions with MEG recordings. This allows us to pinpoint not just where visual processing happens, but also when.
Our results reveal a clear processing hierarchy: primary visual cortex responds first and explains most of the MEG signal, while higher-level visual areas activate later. We show that our method works across different stimuli, MEG sensor types, and brain areas, demonstrating its robustness. This approach offers a new way to track how information flows through the living human brain, opening up new possibilities to study vision in both health and disease.