Encoding neural representations of time-continuous stimulus-response transformations in the human brain with advanced deep neural networks
Abstract
Human behavior arises from the continuous transformation of sensory input into goal-directed actions. While existing analytical methods often break time into discrete events, the stages and underlying representations involved in stimulus-response (S-R) transformations within time-continuous, complex environments remain only partially understood. Encoding models, combined with deep neural networks (DNNs) for feature generation, offer a promising framework for capturing these neural processes. Although DNNs continue to improve in performance, it remains unclear whether these advances translate into more accurate models of brain activity. To address this, we collected fMRI data from participants (N = 23) as they played arcade-style video games and applied DNN-based encoding models to predict voxel-level brain activity. We compared the prediction accuracy of features from three DNNs at different stages of development within our encoding model. We show that the most advanced DNN provides the most predictive feature space for neural responses, while also exhibiting a closer hierarchical alignment between its internal representations and the brain’s functional organization. These results enable a more fine-grained characterization of time-continuous S-R transformations in high-dimensional visuomotor tasks, progressing along the dorsal visual stream and extending into motor-related regions. This approach highlights the potential of machine learning to advance cognitive neuroscience by enhancing the ecological validity of experimental tasks.
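The voxel-wise encoding approach summarized above can be illustrated with a minimal sketch: regularized linear regression mapping a DNN-derived feature space onto per-voxel responses, with prediction accuracy scored as the correlation between predicted and held-out measured activity. All dimensions, synthetic data, and the choice of plain ridge regression here are illustrative assumptions, not the study's actual pipeline (which involves real game stimuli, feature extraction from specific DNN layers, hemodynamic modeling, and cross-validation details omitted here).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dimensions: 600 fMRI time points, 256 DNN features, 50 voxels.
n_trs, n_features, n_voxels = 600, 256, 50

# Stand-in for DNN-layer activations extracted from the time-continuous stimulus.
X = rng.standard_normal((n_trs, n_features))

# Simulate voxel responses as a noisy linear readout of the feature space.
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
Y = X @ true_weights + rng.standard_normal((n_trs, n_voxels))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.25, random_state=0
)

# Fit one ridge model predicting all voxels from the features.
model = Ridge(alpha=10.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Encoding accuracy per voxel: Pearson r between predicted and measured responses.
r = np.array(
    [np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)]
)
print(f"median voxel-wise prediction r = {np.median(r):.2f}")
```

Comparing DNNs within this framework amounts to swapping the feature matrix `X` (e.g., activations from different networks or layers) and asking which yields higher held-out voxel-wise correlations.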