Phonological representations of auditory and visual speech in the occipito-temporal cortex and beyond

Abstract

Speech is a multisensory signal that can be extracted from the voice and the lips. Previous studies suggested that occipital and temporal regions encode both auditory and visual speech features, but their precise location and nature remain unclear. We characterized brain activity using fMRI (in male and female participants) to functionally and individually define the bilateral Fusiform Face Areas (FFA), the left Visual Word Form Area (VWFA), an audio-visual speech region in the left Superior Temporal Sulcus (lSTS), and control regions in the bilateral Parahippocampal Place Areas (PPA). In these regions, we performed multivariate pattern classification of corresponding phonemes (speech sounds) and visemes (lip movements). We observed that the VWFA and the lSTS represent phonological information from both vision and audition. The multisensory nature of phonological representations appeared selective to the anterior portion of the VWFA: we found viseme but not phoneme representations in the adjacent FFA and even in the posterior VWFA, while the PPA did not encode phonology in either modality. Interestingly, cross-modal decoding revealed aligned phonological representations across the senses in the lSTS, but not in the VWFA. A whole-brain cross-modal searchlight analysis additionally revealed aligned audio-visual phonological representations in the bilateral pSTS and in the left somato-motor cortex, overlapping with the representation of the oro-facial articulators. Altogether, our results demonstrate that auditory and visual phonology are represented in the anterior VWFA, extending its functional coding beyond orthography. The geometries of the auditory and visual representations do not align in the VWFA as they do in the lSTS and left somato-motor cortex, suggesting distinct multisensory representations across a distributed phonological network.
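
The cross-modal decoding logic central to these results can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' pipeline: it uses synthetic voxel patterns in place of real fMRI data, an assumed linear SVM classifier, and invented phoneme categories. Training on auditory trials and testing on visual trials probes whether the two modalities share a representational geometry, as reported here for the lSTS but not the VWFA.

```python
# Minimal sketch of cross-modal MVPA decoding with synthetic data.
# ROI size, trial counts, class labels, and the classifier are
# illustrative assumptions, not the authors' actual analysis.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

n_voxels = 200           # voxels in a hypothetical ROI (e.g., lSTS)
n_trials_per_class = 40  # trials per phoneme/viseme category
classes = [0, 1, 2]      # e.g., three phoneme categories

def simulate_patterns(class_means, noise=1.0):
    """Draw noisy voxel patterns around class-specific mean patterns."""
    X, y = [], []
    for label, mu in zip(classes, class_means):
        X.append(mu + noise * rng.standard_normal((n_trials_per_class, n_voxels)))
        y.append(np.full(n_trials_per_class, label))
    return np.vstack(X), np.concatenate(y)

# Shared class means across modalities -> aligned auditory/visual
# geometries (the lSTS-like case; unshared means would mimic the VWFA).
shared_means = [rng.standard_normal(n_voxels) for _ in classes]
X_audio, y_audio = simulate_patterns(shared_means)
X_visual, y_visual = simulate_patterns(shared_means)

# Cross-modal decoding: train on auditory trials, test on visual trials.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_audio, y_audio)
acc = clf.score(X_visual, y_visual)
print(f"cross-modal decoding accuracy: {acc:.2f} (chance = {1/len(classes):.2f})")
```

Above-chance accuracy in this train-on-one-modality, test-on-the-other scheme is what licenses the claim of aligned representations; within-modality decoding alone (as found in the VWFA) only shows that each modality is represented, not that the two share a common code.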

Significance statement

Speech is a multisensory signal that can be extracted from the voice and the lips. Which brain regions encode both visual and auditory speech representations? We show that the Visual Word Form Area (VWFA) and the left Superior Temporal Sulcus (lSTS) both process phonological information from speech sounds and lip movements. However, while the lSTS aligns these representations across the senses, the VWFA does not, indicating different encoding mechanisms. These findings extend the functional role of the VWFA beyond reading. An additional whole-brain approach reveals shared representations in bilateral superior temporal cortex and left somato-motor cortex, indicating a distributed network for multisensory phonology.
