Neural signatures of automatic letter–speech sound integration in literate adults
Abstract
Automaticity in decoding print is crucial for fluent reading. This process relies on associative memories between letters and speech sounds (LSS) that are overlearned through years of reading practice. While previous neuroimaging studies have identified neural correlates of LSS integration across different stages of reading development, the specific neural signatures underlying automatic LSS integration remain unclear. In the present study, we aimed to isolate neural components specifically associated with automatic LSS integration in literate adults. To this end, we developed an artificial-script training paradigm in which adult native Finnish speakers were taught to associate unfamiliar foreign letters with familiar Finnish speech sounds. Using magnetoencephalography (MEG), we directly compared the audiovisual processing of newly learned and overlearned LSS associations within the same task, one day after training. Event-related fields (ERFs) and multivariate decoding revealed largely shared neural circuits for audiovisual integration of both types of LSS associations, as evidenced by multisensory interaction and congruency effects. Interestingly, processing of congruent overlearned audiovisual associations uniquely recruited activity in the left parietal cortex during the 235–475 ms time window. Furthermore, temporal generalization analysis of the congruency effects revealed that, although both newly learned and overlearned audiovisual associations engaged common neural mechanisms, processing of the newly learned associations was systematically delayed by a few hundred milliseconds. Our study identified the spatiotemporal neural signatures underlying automatized LSS processing, offering insights into neural markers that may help assess reading proficiency.
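For readers unfamiliar with the temporal generalization approach mentioned above, the sketch below illustrates how such an analysis is typically implemented with MNE-Python and scikit-learn. It is not the authors' analysis pipeline; the epochs object and the condition labels ('congruent', 'incongruent') are hypothetical placeholders standing in for the congruent versus incongruent audiovisual trials.

```python
# Illustrative sketch only (assumed data and labels), not the study's pipeline.
# Temporal generalization: train a classifier at each time point and test it
# at every other time point, yielding a time-by-time decoding matrix.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import GeneralizingEstimator, cross_val_multiscore

# `epochs` is assumed to be an mne.Epochs object with hypothetical event
# labels 'congruent' and 'incongruent'.
X = epochs.get_data()                                   # (n_trials, n_sensors, n_times)
y = (epochs.events[:, 2] == epochs.event_id['congruent']).astype(int)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=-1)

# Cross-validated scores averaged over folds: a narrow diagonal ridge of
# above-chance decoding indicates transient, time-locked coding, whereas an
# off-diagonal shift (as reported here for newly learned associations)
# indicates the same neural code emerging later in time.
scores = cross_val_multiscore(gen, X, y, cv=5).mean(axis=0)
```

In this kind of matrix, a congruency effect that generalizes off the diagonal toward later testing times is what would suggest that one condition is processed with a temporal delay relative to the other.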