ASHA journals

Age and the visual speech benefit in noise (Beadle et al., 2021)

posted on 2021-11-11, 23:48 authored by Julie Beadle, Jeesun Kim, Chris Davis
Purpose: Listeners understand significantly more speech in noise when the talker's face can be seen (visual speech) than in an auditory-only baseline (the visual speech benefit). This study investigated whether the visual speech benefit is reduced when the correspondence between auditory and visual speech is uncertain, and whether any reduction is affected by listener age (older vs. younger) and by how severely the auditory signal is masked.
Method: Older and younger adults completed a speech recognition in noise task that included an auditory-only condition and four auditory–visual (AV) conditions in which one, two, four, or six silent talking face videos were presented. One face always matched the auditory signal; the other face(s) did not. Auditory speech was presented in noise at −6 and −1 dB signal-to-noise ratio (SNR).
Results: When the SNR was −6 dB, the standard-sized visual speech benefit was reduced for both age groups as more talking faces were presented. When the SNR was −1 dB, younger adults received the standard-sized visual speech benefit even when two talking faces were presented, whereas older adults did not.
Conclusions: The size of the visual speech benefit obtained by older adults was always smaller when AV correspondence was uncertain; this was not the case for younger adults. Difficulty establishing AV correspondence may be a factor that limits older adults’ speech recognition in noisy AV environments.

Supplemental Material S1. Descriptions of all measures, conditions, data exclusions, and methods used to determine sample sizes.

Beadle, J., Kim, J., & Davis, C. (2021). Effects of age and uncertainty on the visual speech benefit in noise. Journal of Speech, Language, and Hearing Research. Advance online publication. https://doi.org/10.1044/2021_JSLHR-20-00495

Funding

This research was supported by The HEARing Cooperative Research Centre and The MARCS Institute for Brain, Behaviour and Development.
