ASHA Journals

Supplemental materials (6 files):
JSLHR-L-17-0210armstrong_SuppS1.pdf (307.3 kB)
JSLHR-L-17-0210armstrong_SuppS2.pdf (277.17 kB)
JSLHR-L-17-0210armstrong_SuppS3.pdf (187.36 kB)
JSLHR-L-17-0210armstrong_SuppS4.pdf (137.83 kB)
JSLHR-L-17-0210armstrong_SuppS5.pdf (241.71 kB)
JSLHR-L-17-0210armstrong_SuppS6.pdf (158.49 kB)

Predicting language outcomes (Armstrong et al., 2018)

Dataset posted on 2018-08-02, 17:13, authored by Rebecca Armstrong, Martyn Symons, James G. Scott, Wendy L. Arnott, David A. Copland, Katie L. McMahon, and Andrew J. O. Whitehouse
Purpose: The current study aimed to compare traditional logistic regression models with machine learning algorithms to investigate the predictive ability of (a) communication performance at 3 years old on language outcomes at 10 years old and (b) broader developmental skills (motor, social, and adaptive) at 3 years old on language outcomes at 10 years old.
Method: Participants (N = 1,322) were drawn from the Western Australian Pregnancy Cohort (Raine) Study (Straker et al., 2017). A general developmental screener, the Infant Monitoring Questionnaire (Squires, Bricker, & Potter, 1990), was completed by caregivers at the 3-year follow-up. Language ability at 10 years old was assessed using the Clinical Evaluation of Language Fundamentals–Third Edition (Semel, Wiig, & Secord, 1995). Logistic regression models and interpretable machine learning algorithms were used to assess predictive abilities of early developmental milestones for later language outcomes.
Results: Overall, the findings showed that prediction accuracies were comparable between the logistic regression and machine learning models, whether communication performance alone or communication plus the broader developmental domains was used to predict language performance at 10 years old. Decision trees are included to present these findings visually but must be interpreted with caution because of the models' poor overall accuracy.
Conclusions: The current study provides preliminary evidence that machine learning algorithms provide equivalent predictive accuracy to traditional methods. Furthermore, the inclusion of broader developmental skills did not improve predictive capability. Assessment of language at more than 1 time point is necessary to ensure children whose language delays emerge later are identified and supported.
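
As an illustration of the modeling comparison described above, the following minimal sketch fits a logistic regression and an interpretable decision tree to early screener scores and inspects both models. It assumes scikit-learn; the file name, column names, and tree depth are hypothetical placeholders, not the Raine Study variables, and this is not the authors' analysis code.

# Illustrative sketch only (not the authors' code): fit the two model families
# compared in the study and inspect them for interpretability.
# File and column names below are hypothetical placeholders.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, plot_tree

df = pd.read_csv("raine_subset.csv")                      # hypothetical file name
features = ["imq_communication", "imq_motor",             # hypothetical IMQ subscale columns
            "imq_adaptive", "imq_social"]
X, y = df[features], df["celf3_delayed"]                  # 1 = delayed language at 10 years

# Logistic regression: exponentiated coefficients give odds ratios per predictor.
logit = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = dict(zip(features, np.exp(logit.coef_[0])))
print(odds_ratios)

# Decision tree: a shallow depth (an assumption here) keeps the plotted tree readable.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
plot_tree(tree, feature_names=features, class_names=["typical", "delayed"], filled=True)
plt.show()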

Supplemental Material S1. Flow diagram representing the participants included in the current study from the wider Raine cohort (Straker et al., 2017).

Supplemental Material S2. Characteristics of participants in the Raine cohort (Straker et al., 2017) who were “included” (n = 1,180) and “not included” (n = 1,481) in the current study.

Supplemental Material S3. Summary statistics for the typical (n = 1,040) and delayed (n = 476) language groups based on performance on the Infant Monitoring Questionnaire (IMQ, Squires, Bricker, & Potter, 1990) subscales at 3 years and the Clinical Evaluation of Language Fundamentals–Third Edition (CELF-3, Semel, Wiig, & Secord, 1995) at 10 years.

Supplemental Material S4. Results from the adjusted multivariate logistic regression with Infant Monitoring Questionnaire (IMQ, Squires, Bricker, & Potter, 1990) communication items at 3 years.

Supplemental Material S5. Results from multivariate logistic regression with all Infant Monitoring Questionnaire (IMQ, Squires, Bricker, & Potter, 1990) items at 3 years.

Supplemental Material S6. Predictive validity of the Infant Monitoring Questionnaire (IMQ, Squires, Bricker, & Potter, 1990) on language outcomes using default machine learning parameters (percentage accuracy using 10 × 10-fold cross-validation).
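
Supplemental Material S6 reports percentage accuracy from 10 × 10-fold cross-validation with default machine learning parameters. A minimal sketch of that evaluation scheme, assuming scikit-learn and the same hypothetical X and y as in the sketch above, might look like this (not the authors' code):

# Sketch of a 10 x 10 fold cross-validation accuracy estimate with default
# hyperparameters; X and y are the hypothetical predictors/outcome defined earlier.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
models = [("logistic regression", LogisticRegression(max_iter=1000)),  # max_iter raised only for convergence
          ("decision tree", DecisionTreeClassifier())]                 # scikit-learn defaults otherwise
for name, model in models:
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {100 * acc.mean():.1f}% mean accuracy (SD {100 * acc.std():.1f})")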

Armstrong, R., Symons, M., Scott, J. G., Arnott, W. L., Copland, D. A., McMahon, K. L., & Whitehouse, A. J. O. (2018). Predicting language difficulties in middle childhood from early developmental milestones: A comparison of traditional regression and machine learning techniques. Journal of Speech, Language, and Hearing Research. Advance online publication. https://doi.org/10.1044/2018_JSLHR-L-17-0210

Funding

The Raine Study is funded by the Raine Medical Research Foundation; the National Health and Medical Research Council; The University of Western Australia; The UWA Faculty of Medicine, Dentistry and Health Sciences; Curtin University; Edith Cowan University; Telethon Kids Institute; and Women and Infants Research Foundation. Andrew J. O. Whitehouse is funded by a senior research fellowship from the National Health and Medical Research Council (Grant 1077966). David A. Copland is funded by an ARC Future Fellowship (Grant FT100100976) and a UQ Vice-Chancellor’s Research and Teaching Fellowship.
