Deep Sound Synthesis Reveals Novel Category-Defining Sound Features in the Human Auditory Cortex
Abstract
The human auditory system extracts meaning from the environment by transforming acoustic input signals into semantic categories. Specific acoustic features give rise to distinct categorical percepts, such as speech or music, and to spatially distinct preferential responses in the auditory cortex. These responses contain category-relevant information, yet their representational level and role within the acoustic-to-semantic transformation process remain unclear. We combined neuroimaging, a deep neural network, brain-based sound synthesis, and psychophysics to identify the sound features that are internally represented in the speech- and music-selective human auditory cortex and to test their functional role in sound categorization. We found that the synthesized sounds exhibit unnatural features distinct from those normally associated with speech and music, yet they elicit categorical cortical and behavioral responses resembling those of natural speech and music. Our findings provide new insights into the fundamental sound features underlying speech and music categorization in the human auditory cortex.