Influence of speech-in-noise perception, gender, and age on lipreading ability for monosyllabic words

Abstract

Lipreading is an important skill that supports communication in people with deafness or hearing loss, and there is recent interest in developing lipreading training programs as a strategy to rehabilitate their speech perception. However, the success of these programs is mixed, potentially due to individual differences in participant characteristics. Here we conducted an online cross-sectional study to examine how hearing ability, gender, and age shape lipreading ability. Forty participants aged 41–75 viewed short, silent video clips of a woman speaking a monosyllabic word and typed the word they perceived into a response box. In addition, we collected demographic information and speech-in-noise perception scores. Lipreading performance was scored at the word level (i.e., lexical level) and at sublexical levels for individual phonemes and visually identical homophemes (i.e., visemes) of the target words. Women correctly reported more phonemes and visemes per word than men, but no gender effect was found at the whole-word level. There was an interaction between age and speech-in-noise perception for words and phonemes: lipreading performance was best for comparatively younger participants with worse speech-in-noise performance, and this effect diminished with increasing age. This suggests a compensatory reliance on visual speech that declines in older adults. Overall, our results indicate that gender, age, and speech-in-noise perception shape lipreading ability, although effects may differ depending on whether performance is analyzed at the lexical or sublexical level.
