Temporal recurrence as a general mechanism to explain neural responses in the auditory system
Abstract
Computational models of neural processing in the auditory cortex usually ignore that neurons have an internal memory: they characterize responses through simple convolutions over a finite temporal window of arbitrary duration. To circumvent this limitation, we propose a new, simple and fully recurrent neural network (RNN) architecture that incorporates cutting-edge computational blocks from the deep learning community and constitutes the first attempt to model auditory responses with deep RNNs. We evaluated the ability of this approach to fit neural responses from 8 publicly available datasets, spanning 3 animal species and 6 auditory brain areas, the largest compilation of its kind. Our recurrent models significantly outperform previous methods and a new Transformer-based architecture of our own design on this task, suggesting that temporal recurrence is key to explaining auditory responses. Finally, we developed a novel interpretation technique to reverse-engineer any pretrained model, regardless of its stateful or stateless nature. Largely inspired by work on explainable artificial intelligence (xAI), our method suggests that auditory neurons have a much longer memory (several seconds) than indicated by current STRF techniques. Together, these results strongly motivate the use of deep RNNs in computational models of sensory neurons, as protean building blocks capable of assuming any function.
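The core contrast in the abstract, between a stateless model that convolves the stimulus over a finite temporal window (as in STRF-style approaches) and a stateful recurrent model whose hidden state carries memory indefinitely, can be sketched in a toy NumPy example. All names, sizes, and random weights below are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectrogram-like stimulus: T time bins x n_freq channels.
n_freq, T, window = 8, 200, 10
stim = rng.standard_normal((T, n_freq))

# --- Stateless model: convolution over a finite temporal window ---
strf = rng.standard_normal((window, n_freq)) * 0.1

def strf_response(stim):
    """Predicted rate per time bin, seeing only the last `window` bins."""
    out = np.zeros(stim.shape[0])
    for t in range(stim.shape[0]):
        past = stim[max(0, t - window + 1) : t + 1]   # finite memory
        out[t] = np.sum(strf[-past.shape[0]:] * past)
    return out

# --- Stateful model: a simple RNN whose hidden state is internal memory ---
n_hidden = 16
W = rng.standard_normal((n_hidden, n_hidden)) * 0.3  # recurrent weights
U = rng.standard_normal((n_hidden, n_freq)) * 0.3    # input weights
v = rng.standard_normal(n_hidden) * 0.3              # linear readout

def rnn_response(stim):
    """Predicted rate per time bin; past inputs persist through h."""
    h = np.zeros(n_hidden)
    out = np.zeros(stim.shape[0])
    for t, x in enumerate(stim):
        h = np.tanh(W @ h + U @ x)                   # carries memory forward
        out[t] = v @ h
    return out

# Perturb the stimulus at t=0 and probe a bin beyond the STRF window:
stim2 = stim.copy()
stim2[0] += 5.0                  # change only the very first time bin
t_late = window + 5              # a bin outside the convolutional window

strf_diff = abs(strf_response(stim)[t_late] - strf_response(stim2)[t_late])
rnn_diff = abs(rnn_response(stim)[t_late] - rnn_response(stim2)[t_late])
print(strf_diff)   # exactly 0.0: the windowed model cannot see that far back
print(rnn_diff)    # nonzero: the recurrent state retains a trace of the input
```

The perturbation test is the point of the sketch: by construction, a finite-window convolution is blind to anything older than its window, whereas the recurrent hidden state can, in principle, reflect arbitrarily old inputs, which is the property the paper's interpretation method probes when estimating effective neuronal memory.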