S1_JSLHR-22-00218buchholz.xlsx (29.22 kB)

A real-time speech understanding test (Buchholz et al., 2022)

Posted on 22.11.2022, 20:56, authored by Joerg M. Buchholz, Chris Davis, Julie Beadle, and Jeesun Kim

Purpose: This study aimed to develop and test a measure of real-time continuous speech understanding to be used with natural dialogues.

Method: The measure was based on a category monitoring paradigm and employed five existing recordings of natural dialogues, from which the test categories and associated target words were derived. For each dialogue, a listener was first given a semantic category and then asked to press a button as quickly as possible whenever they heard an instance of that category. We tested 63 younger adults using five semantic categories (family, media, season, temperature, and travel) at three noise levels (quiet, 0 dB, and −5 dB signal-to-noise ratio [SNR]). Performance was measured in terms of accuracy and response time.

Results: The results showed clear differences between the three noise conditions regardless of semantic category. The peak of the response distribution was highest and earliest in the quiet condition and was reduced and delayed with decreasing SNR. Responses varied across categories, reflecting differences in the complexity of a given category or in the typicality of the association between target words and their category. Broad categories and/or target words that were less directly associated with their category yielded lower hit rates and longer response times.

Conclusion: The results are discussed in terms of the sensitivity (hit rate) of the performance measure, as well as whether it captured higher level semantic, contextual, and discourse properties of the dialogues.

Supplemental Material S1. Target word properties.

Buchholz, J. M., Davis, C., Beadle, J., & Kim, J. (2022). Developing a real-time test to investigate conversational speech understanding. Journal of Speech, Language, and Hearing Research. Advance online publication.


The first author acknowledges support from Sonova. The second author and the corresponding author were supported by the Australian Research Council (DP 200102188).