
Multiple channels in sign-supported speech (Mastrantuono et al., 2019)

Dataset posted on 2019-05-16 by Eliana Mastrantuono, Michele Burigo, Isabel R. Rodríguez-Ortiz, and David Saldaña.
Purpose: The use of sign-supported speech (SSS) in the education of deaf students has recently been discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication.
Method: Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers’ foveal attention to the linguistic source in SSS from which most information is extracted, lip movements or signs, by magnifying the face area, thus modifying the perceptual accessibility of lip movements (magnified condition), and by constraining the visual field to either the face or the sign through a moving-window paradigm (gaze-contingent condition; a schematic sketch of this paradigm follows the abstract). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate on the orally transmitted message.
Results: In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze-contingent condition across all participants. Fixations toward signs also increased in the magnified condition. In Experiment 2, results indicated lower accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech.
Conclusions: All participants, even those with residual hearing, rely on signs when attending to SSS, either peripherally or through overt attention, depending on the perceptual conditions.
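
For readers unfamiliar with the gaze-contingent moving-window paradigm mentioned in the Method, the following is a minimal Python sketch of the general idea: only a region around the current gaze position stays visible, and the rest of the frame is masked. This is not the study's implementation; the circular window shape, the gray mask value, and the 8-bit frame format are all assumptions made for illustration.

import numpy as np

def moving_window(frame, gaze_xy, radius):
    """Gaze-contingent moving window (illustrative sketch).

    frame   : H x W (x C) image array for the current video frame,
              assumed to hold 8-bit pixel values
    gaze_xy : (x, y) gaze position in pixel coordinates
    radius  : radius of the visible window in pixels (assumed circular)
    """
    h, w = frame.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    # Boolean mask of pixels inside the circular window around the gaze point.
    visible = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius ** 2
    masked = np.full_like(frame, 128)  # neutral gray everywhere else
    masked[visible] = frame[visible]
    return masked

In an actual experiment, a routine like this would be applied to every video frame using the latest sample from the eye tracker, so that the visible window follows the observer's gaze in real time.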

Supplemental Material S1. Tables including fixed effects of the generalized linear mixed-effects regression (GLMER) analyses. An illustrative model-fitting sketch appears after the list of supplemental materials.

Supplemental Material S2. Background skills.
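
As context for the GLMER tables in Supplemental Material S1, the sketch below shows one way to fit a mixed-effects logistic regression to trial-level accuracy in Python. It is not the authors' analysis code: GLMER analyses of this kind are typically run with lme4's glmer() in R, and statsmodels' Bayesian mixed GLM stands in for it here. The file name and column names (trials.csv, accuracy, condition, participant) are hypothetical.

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical trial-level data: one row per trial, with a binary
# accuracy outcome (0/1), an experimental condition, and a participant ID.
trials = pd.read_csv("trials.csv")

# Fixed effect of condition on accuracy, with by-participant random
# intercepts as a variance component; roughly analogous to
# glmer(accuracy ~ condition + (1 | participant), family = binomial) in R.
model = BinomialBayesMixedGLM.from_formula(
    "accuracy ~ C(condition)",
    {"participant": "0 + C(participant)"},
    trials,
)
result = model.fit_vb()  # variational Bayes approximation to the posterior
print(result.summary())

The variational Bayes fit gives approximate posterior means and standard deviations for the fixed effects, which play the role of the fixed-effect estimates reported in the S1 tables.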

Mastrantuono, E., Burigo, M., Rodríguez-Ortiz, I. R., & Saldaña, D. (2019). The role of multiple articulatory channels of sign-supported speech revealed by visual processing. Journal of Speech, Language, and Hearing Research, 62, 1625–1656. https://doi.org/10.1044/2019_JSLHR-S-17-0433

Funding

This work, developed within the Language and Perception (LanPercept) project, was supported by the European Union’s Seventh Framework Programme for research, technological development, and demonstration under Grant Agreement 316748.
