Controlling speech level and spectral shape (Fogerty et al., 2020)
Posted on 2020-11-16 by Daniel Fogerty, Rachel Madorskiy, Jayne B. Ahlstrom, Judy R. Dubno
Purpose: This study investigated methods used to simulate factors associated with reduced audibility, increased speech levels, and spectral shaping for aided older adults with hearing loss. Simulations provided to younger normal-hearing adults were used to investigate the effect of sensation level, speech presentation level, and spectral shape in comparison to older adults with hearing loss.
Method: Measures were assessed in quiet, steady-state noise, and speech-modulated noise. Older adults with hearing loss listened to speech that was spectrally shaped according to their hearing thresholds. Younger adults with normal hearing listened to speech that simulated the hearing-impaired group’s (a) reduced audibility, (b) increased speech levels, and (c) spectral shaping. Group comparisons were made based on speech recognition performance and masking release. Additionally, younger adults completed measures of listening effort and perceived speech quality to assess if differences across simulations in these outcome measures were similar to those for speech recognition.
Results: Across the simulations employed, testing in the presence of a threshold-matching noise best matched the differences in speech recognition and masking release between younger and older adults. This result remained consistent across the other two outcome measures.
Conclusions: A combination of audibility, speech level, and spectral shape factors is required to simulate differences between listeners with normal and impaired hearing in recognition, listening effort, and perceived speech quality. The use of spectrally shaped and amplified speech in the presence of a threshold-matching noise best provided this simulated control.
Supplemental Material S1. Speech recognition scores for (A) Exp. 1 and (B) Exp. 2a. Scores in Exp. 2 for YNH simulated conditions were compared relative to OHI scores. To facilitate this comparison, solid and dashed lines indicate performance for older hearing-impaired (OHI) listeners in speech-shaped noise (SSN) and speech-modulated noise (SMN), respectively. Error bars indicate the standard error of the mean. Quiet condition scores are provided as a reference to indicate maximum expected performance.
Supplemental Material S2. Speech recognition results in (A) SSN, (B) SMN, and (C) masking release. Results are plotted for the YNH-shaped/OHI subject pairs (younger listeners in grey and older listeners in black). Subject pairings are ordered according to the OHI performance in SSN.
Supplemental Material S3. Results for individual OHI listeners on the 1st and 2nd trials of the speech-in-noise testing from the larger project. Only results from the 2nd trial were used for comparison in Exp. 1.
Supplemental Material S4. Excluded subscales. (A) Results for the performance subscale measured in Exp. 2b for the NASA-TLX listening effort subjective ratings. (B) Results for the loudness subscale measured in Exp. 2c for measures of perceived speech quality.
Supplemental Material S5. (A) Listening effort ratings, (B) response times, and (C) speech quality ratings for young normal-hearing listeners. Effort and response time measures were collected in Exp. 2b and quality in Exp. 2c. To facilitate comparisons, scores are plotted so that up indicates better perception (i.e., less effort, faster response times, or higher quality). Error bars indicate the standard error of the mean. Quiet scores (white bars) are provided as a reference to indicate maximum performance.
Fogerty, D., Madorskiy, R., Ahlstrom, J. B., & Dubno, J. R. (2020). Comparing speech recognition for listeners with normal and impaired hearing: Simulations for controlling differences in speech levels and spectral shape. Journal of Speech, Language, and Hearing Research. Advance online publication. https://doi.org/10.1044/2020_JSLHR-20-00246
This work was supported, in part, by the National Institutes of Health, National Institute on Deafness and Other Communication Disorders (Grant R01 DC015465, awarded to DF, and Grant R01 DC000184, awarded to JRD), and the National Center for Advancing Translational Sciences of the National Institutes of Health (Grant UL1 TR001450, awarded to MUSC). Some of the research was conducted in a facility constructed with support from a Research Facilities Improvement Program grant (C06 RR014516, awarded to MUSC) from the National Institutes of Health/National Center for Research Resources.
Keywords: speech; hearing; audiology; speech-language pathology; recognition; normal hearing; impaired; impairment; hearing loss; adults; level; spectral shape; simulate; factors; audibility; hearing aids; sensation; presentation; quiet; steady-state noise; speech-modulated noise; threshold; comparison; masking; listening effort; quality; noise; perceived; perception; amplified; age-related hearing loss; age
Categories: Linguistic Processes (incl. Speech Production and Comprehension); Acoustics and Acoustical Devices; Waves