Automatic analysis of child speech (Knowles et al., 2018)
Dataset posted on 19.09.2018, 17:40; authored by Thea Knowles, Meghan Clayards, and Morgan Sonderegger
Purpose: Heterogeneous child speech was force-aligned to investigate whether (a) manipulating specific parameters could improve alignment accuracy and (b) forced alignment could be used to replicate published results on acoustic characteristics of /s/ production by children.
Method: In Part 1, child speech from 2 corpora was force-aligned with a trainable aligner (Prosodylab-Aligner) under different conditions that systematically manipulated input training data and the type of transcription used. Alignment accuracy was determined by comparing hand and automatic alignments as to how often they overlapped (%-Match) and absolute differences in duration and boundary placements. Using mixed-effects regression, accuracy was modeled as a function of alignment conditions, as well as segment and child age. In Part 2, forced alignments derived from a subset of the alignment conditions in Part 1 were used to extract spectral center of gravity of /s/ productions from young children. These findings were compared to published results that used manual alignments of the same data.
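The accuracy metrics described above can be sketched in a few lines of numpy. Note that the exact %-Match criterion is not specified here; the version below (a force-aligned segment "matches" when its midpoint falls inside the corresponding manual segment) is one plausible operationalisation used purely for illustration, and the example timestamps are invented.

```python
import numpy as np

# Hypothetical segment boundaries in seconds: rows are corresponding
# segments, columns are (onset, offset).
manual = np.array([(0.10, 0.25), (0.25, 0.40), (0.40, 0.62)])
auto   = np.array([(0.12, 0.24), (0.26, 0.43), (0.41, 0.60)])

def percent_match(manual, auto):
    """%-Match: proportion of force-aligned segments whose midpoint
    lies within the corresponding manually aligned segment.
    (An illustrative criterion; the paper's exact definition may differ.)"""
    mid = auto.mean(axis=1)
    hits = (mid >= manual[:, 0]) & (mid <= manual[:, 1])
    return 100.0 * hits.mean()

def absolute_differences(manual, auto):
    """Absolute onset, offset, and duration differences in seconds,
    the three distance measures modeled in Part 1."""
    onset_diff = np.abs(auto[:, 0] - manual[:, 0])
    offset_diff = np.abs(auto[:, 1] - manual[:, 1])
    dur_diff = np.abs(np.diff(auto, axis=1) - np.diff(manual, axis=1)).ravel()
    return onset_diff, offset_diff, dur_diff
```

In the study these per-segment measures were then modeled with mixed-effects regression as a function of alignment condition, segment, and child age; the snippet covers only the measurement step.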
Results: Overall, the results of Part 1 demonstrated that using training data more similar to the data to be aligned, as well as phonetic transcription, led to improvements in alignment accuracy. Speech from older children was aligned more accurately than speech from younger children. In Part 2, /s/ center of gravity extracted from force-aligned segments was found to diverge in the speech of male and female children, replicating the pattern found in previous work using manually aligned segments. This was true even for the least accurate forced alignment method.
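Spectral center of gravity, the acoustic measure extracted from the force-aligned /s/ segments, is the power-weighted mean frequency of a segment's spectrum. A minimal sketch follows; tools such as Praat compute this measure with their own windowing and weighting choices, so this is an illustrative approximation, not the study's exact extraction pipeline. The 6 kHz sinusoid is a synthetic sanity check, not speech data.

```python
import numpy as np

def spectral_cog(signal, sr):
    """Spectral center of gravity (Hz): the mean frequency of the
    Hann-windowed magnitude spectrum, weighted by spectral power."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    power = spec ** 2
    return (freqs * power).sum() / power.sum()

# Sanity check on a pure 6 kHz tone: its center of gravity
# should sit close to 6000 Hz.
sr = 22050
t = np.arange(0, 0.05, 1.0 / sr)
cog = spectral_cog(np.sin(2 * np.pi * 6000 * t), sr)
```

For a sibilant like /s/, higher center of gravity reflects frication energy concentrated at higher frequencies, which is why the measure separates male and female children's productions.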
Conclusions: Alignment accuracy of child speech can be improved by using more specific training and transcription. However, poor alignment accuracy was not found to impede acoustic analysis of /s/ produced by even very young children. Thus, forced alignment presents a useful tool for the analysis of child speech.
Supplemental Material S1. Summary of fixed-effects coefficients in the logistic regression models of %-Match between manually and force-aligned segments (Part 1).
Supplemental Material S2. Summary of fixed-effects coefficients in the linear regression models of absolute duration differences between manually and force-aligned segments (Part 1).
Supplemental Material S3. Summary of fixed-effects coefficients in the linear regression models of absolute onset differences between manually and force-aligned segments (Part 1).
Supplemental Material S4. Summary of fixed-effects coefficients in the linear regression models of absolute offset differences between manually and force-aligned segments (Part 1).
Supplemental Material S5. Summary of fixed-effects coefficients in the linear regression models of center of gravity differences between manually aligned, adult-trained force aligned, and child-trained force aligned conditions, as well as child age and sex (Part 2).
Knowles, T., Clayards, M., & Sonderegger, M. (2018). Examining factors influencing the viability of automatic acoustic analysis of child speech. Journal of Speech, Language, and Hearing Research, 61(10), 2487–2501. https://doi.org/10.1044/2018_JSLHR-S-17-0275
This research was supported by the McGill Collaborative Research Development Fund awarded to Meghan Clayards, Aparna Nadig, Kristine Onishi, Morgan Sonderegger, and Michael Wagner.
Keywords: speech, acoustics, children, acoustic analysis, automatic alignment, accuracy, automation, corpus, input, transcription, manual, duration, boundary, spectral, phonetic, older children, younger children, center of gravity, male, female, segment, analysis, Linguistic Processes (incl. Speech Production and Comprehension), Acoustics and Acoustical Devices; Waves