Visual integration in speech perception (O’Hanlon et al., 2025)

Dataset posted on 2025-01-02, authored by Brandon O’Hanlon, Christopher J. Plack, and Helen E. Nuttall

Purpose: In difficult listening conditions, the visual system assists with speech perception through lipreading. Stimulus onset asynchrony (SOA) is used to investigate the interaction between the two modalities in speech perception. Previous estimates of audiovisual benefit and SOA integration period differ widely. A limitation of previous research is a lack of consideration of visemes—categories of phonemes defined by similar lip movements when produced by a speaker—to ensure that selected phonemes are visually distinct. This study aimed to reassess the benefits of audiovisual lipreading to speech perception when different viseme categories are selected as stimuli and presented in noise. The study also aimed to investigate the effects of SOA on these stimuli.

Method: Sixty participants were tested online and presented with audio-only and audiovisual stimuli containing the speaker’s lip movements. The speech was presented either with or without noise and had six different SOAs (0, 200, 216.6, 233.3, 250, and 266.6 ms). Participants discriminated between speech syllables with button presses.
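
To make the design concrete, the sketch below enumerates the factorial structure implied by the Method (modality × noise × SOA). It is illustrative only: the condition labels, and the assumption that all three factors were fully crossed, are inferences from this summary rather than the authors’ actual experiment code.

    from itertools import product

    # Assumed factor levels, taken from the Method summary above.
    MODALITIES = ["audio_only", "audiovisual"]
    NOISE_LEVELS = ["quiet", "noise"]
    SOAS_MS = [0, 200, 216.6, 233.3, 250, 266.6]  # stimulus onset asynchronies in ms

    def build_conditions():
        """Enumerate every modality x noise x SOA cell of the assumed design."""
        return [
            {"modality": m, "noise": n, "soa_ms": soa}
            for m, n, soa in product(MODALITIES, NOISE_LEVELS, SOAS_MS)
        ]

    if __name__ == "__main__":
        cells = build_conditions()
        print(len(cells), "design cells")  # 2 x 2 x 6 = 24
        print(cells[0])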

Results: The benefit of visual information was weaker than that reported in previous studies. There was a significant increase in reaction times as SOA was introduced, but there were no significant effects of SOA on accuracy. Furthermore, exploratory analyses suggest that the effect of noise was not equal across viseme categories: “ba” was more difficult to recognize than “ka” in noise.

Conclusion: The findings suggest that the contribution of audiovisual integration to speech processing is weaker when visemes are taken into account, and that the present findings are not sufficient to identify a full integration period.

Supplemental Material S1. Generalized linear mixed-effects regression (GLMER) model equations, including the fixed- and random-effects structure for each main hypothesis.
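
For orientation, a GLMER for trial-level accuracy typically takes a form along the following lines. This is a generic sketch with assumed predictors (Modality, Noise, SOA) and a by-participant random intercept, not the authors’ actual model specification, which is given in Supplemental Material S1.

    \mathrm{logit}\!\left(\Pr(\mathrm{correct}_{ij})\right)
      = \beta_0 + \beta_1\,\mathrm{Modality}_{ij}
      + \beta_2\,\mathrm{Noise}_{ij}
      + \beta_3\,\mathrm{SOA}_{ij}
      + u_{0j}, \qquad u_{0j} \sim \mathcal{N}(0,\,\sigma_u^2)

Here i indexes trials and j indexes participants; the outcome is a Bernoulli (correct/incorrect) response modeled with a logit link, and u_{0j} is the participant-level random intercept.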

O’Hanlon, B., Plack, C. J., & Nuttall, H. E. (2025). Reassessing the benefits of audiovisual integration to speech perception and intelligibility. Journal of Speech, Language, and Hearing Research, 68(1), 26–39. https://doi.org/10.1044/2024_JSLHR-24-00162

Funding

This work was supported by the Economic and Social Research Council Training Grant ES/P000665/1 (awarded to O’Hanlon), the Manchester Biomedical Research Centre and the National Institute for Health and Care Research Grant NIHR203308 (awarded to Plack), and the Biotechnology and Biological Sciences Research Council New Investigator Grant BB/S008527/1 (awarded to Nuttall).
