Posted on 2019-04-04, 22:24. Authored by Jonathan H. Venezia, Allison-Graham Martin, Gregory Hickok, Virginia M. Richards.
Purpose: Age-related sensorineural hearing loss can dramatically affect speech recognition performance due to reduced audibility and suprathreshold distortion of spectrotemporal information. Normal aging produces changes within the central auditory system that impose further distortions. The goal of this study was to characterize the effects of aging and hearing loss on perceptual representations of speech.
Method: We asked whether speech intelligibility is supported by different patterns of spectrotemporal modulations (STMs) in older listeners compared with young normal-hearing listeners. We recruited 3 groups of participants: 20 older hearing-impaired (OHI) listeners, 19 age-matched older normal-hearing (ONH) listeners, and 10 young normal-hearing (YNH) listeners. Listeners performed a speech recognition task in which randomly selected regions of the speech STM spectrum were revealed from trial to trial. The overall amount of STM information was varied using an up–down staircase to hold performance at 50% correct. Ordinal regression was used to estimate weights showing which regions of the STM spectrum were associated with good performance (a "classification image" or CImg).
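For intuition, here is a minimal sketch of how a classification image of this kind can be estimated. It is not the authors' analysis code: the STM grid size, bubble density, simulated ground truth, and the use of binary logistic regression in place of the ordinal model are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dimensions: an 8 x 12 grid of (temporal x spectral) modulation bins.
N_TRIALS, H, W = 2000, 8, 12

# masks[t] marks which STM bins the bubbles filter revealed on trial t.
masks = (rng.random((N_TRIALS, H * W)) < 0.3).astype(float)

# Hypothetical ground truth: one STM region drives intelligibility.
true_map = np.zeros((H, W))
true_map[2:5, 3:7] = 1.0
logits = masks @ true_map.ravel() - 2.0
correct = rng.random(N_TRIALS) < 1.0 / (1.0 + np.exp(-logits))

# Regress trial outcome on the revealed bins; the fitted coefficients,
# reshaped onto the STM plane, form the classification image (CImg).
model = LogisticRegression(max_iter=1000).fit(masks, correct)
cimg = model.coef_.reshape(H, W)
print(np.round(cimg, 1))
```

The recovered weight map should be largest over the region that drove correct responses, which is the logic behind reading a CImg as showing which STM regions support intelligibility.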
Results: The results indicated that (a) large-scale CImg patterns did not differ among the 3 groups; (b) weights in a small region of the CImg decreased systematically as hearing loss increased; (c) CImgs were also nonsystematically distorted in OHI listeners, and the magnitude of this distortion predicted speech recognition performance even after accounting for audibility; and (d) YNH listeners performed better overall than the older groups.
Conclusion: We conclude that OHI and ONH listeners rely on the same speech STMs as YNH listeners but encode this information less efficiently.
Supplemental Observer Simulation: We performed a simulation in which a simulated listener completed the same experimental procedure as the real listeners at each of several values of the average number of bubbles at threshold (50–150 in steps of 10, the range observed for real OHI listeners).
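A minimal sketch of how such a sweep might be organized is given below. The simulated observer's psychometric function, the step size, and the threshold estimate are illustrative assumptions, not the authors' simulated listener; the only structural claim is that a 1-up/1-down staircase converges on the bubble count yielding about 50% correct.

```python
import random

def staircase_threshold(p_correct, n_trials=200, start=100, step=5):
    """1-up/1-down staircase over the number of bubbles: fewer bubbles
    (harder) after a correct trial, more after an error. Converges to the
    bubble count yielding ~50% correct (the 'bubbles threshold')."""
    n = start
    history = []
    for _ in range(n_trials):
        correct = random.random() < p_correct(n)
        history.append(n)
        n = max(step, n + (-step if correct else step))
    return sum(history[-50:]) / 50  # average of the last 50 trials

# Toy observer: exactly 50% correct when n equals its 'true' threshold.
def make_observer(true_threshold):
    return lambda n: min(0.98, 0.5 * n / true_threshold)

# Same procedure at each simulated threshold, 50-150 in steps of 10.
for target in range(50, 151, 10):
    est = staircase_threshold(make_observer(target))
    print(f"true threshold {target:3d} -> staircase estimate {est:6.1f}")
```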
Supplemental Material S1–S3. Audio. Examples of auditory bubbles stimuli. Three sentences with varying degrees of intelligibility are provided. For each sentence, a filter with 50 bubbles was applied to produce the stimulus. In each audio file, the bubbles-filtered sentence is followed by a clear, unprocessed version of the same sentence to allow comparison.
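For intuition about what a "50-bubble" filter is, here is a minimal sketch of a generic bubbles mask. The grid size and bubble width are assumptions, and the published filters operate on the speech STM spectrum rather than an arbitrary grid; this only illustrates the general idea of revealing random smooth regions.

```python
import numpy as np

def bubbles_mask(n_bubbles, shape=(64, 64), sigma=3.0, rng=None):
    """Build a smooth 0-1 mask with Gaussian 'bubbles' at random locations;
    multiplying it pointwise with a spectrum reveals only those regions."""
    if rng is None:
        rng = np.random.default_rng()
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape)
    for cy, cx in zip(rng.uniform(0, shape[0], n_bubbles),
                      rng.uniform(0, shape[1], n_bubbles)):
        mask = np.maximum(mask, np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                       / (2 * sigma ** 2)))
    return mask

mask = bubbles_mask(50)  # 50 bubbles, as in the supplemental audio examples
print(mask.shape, round(float(mask.max()), 2))
```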
Venezia, J. H., Martin, A.-G., Hickok, G., & Richards, V. M. (2019). Identification of the spectrotemporal modulations that support speech intelligibility in hearing-impaired and normal-hearing listeners. Journal of Speech, Language, and Hearing Research, 62, 1051–1067. https://doi.org/10.1044/2018_JSLHR-H-18-0045
Funding
This work was supported by an American Speech-Language-Hearing Foundation New Investigators Research Grant to J. H. V. and a UC Irvine Undergraduate Research Opportunities Program award to A.-G. M. Research reported in this publication was supported by the National Institute on Deafness and Other Communication Disorders under Award R21 DC013406 (multiple principal investigators: V. M. R. and Y. Shen), the National Center for Research Resources and the National Center for Advancing Translational Sciences under Award UL1 TR001414, and the UC Irvine Alzheimer's Disease Research Center under Award P50 AG016573, all from the National Institutes of Health.