Predicting language outcomes (Armstrong et al., 2018)
Dataset posted on 02.08.2018 by Rebecca Armstrong, Martyn Symons, James G. Scott, Wendy L. Arnott, David A. Copland, Katie L. McMahon, Andrew J. O. Whitehouse
Purpose: The current study aimed to compare traditional logistic regression models with machine learning algorithms to investigate the predictive ability of (a) communication performance and (b) broader developmental skills (motor, social, and adaptive) at 3 years old on language outcomes at 10 years old.
Method: Participants (N = 1,322) were drawn from the Western Australian Pregnancy Cohort (Raine) Study (Straker et al., 2017). A general developmental screener, the Infant Monitoring Questionnaire (Squires, Bricker, & Potter, 1990), was completed by caregivers at the 3-year follow-up. Language ability at 10 years old was assessed using the Clinical Evaluation of Language Fundamentals–Third Edition (Semel, Wiig, & Secord, 1995). Logistic regression models and interpretable machine learning algorithms were used to assess predictive abilities of early developmental milestones for later language outcomes.
Results: Overall, the findings showed that prediction accuracies were comparable between logistic regression and machine learning models using communication-only performance as well as performance on communication and broader developmental domains to predict language performance at 10 years old. Decision trees are incorporated to visually present these findings but must be interpreted with caution because of the poor accuracy of the models overall.
Conclusions: The current study provides preliminary evidence that machine learning algorithms provide equivalent predictive accuracy to traditional methods. Furthermore, the inclusion of broader developmental skills did not improve predictive capability. Assessment of language at more than 1 time point is necessary to ensure children whose language delays emerge later are identified and supported.
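The comparison described above, logistic regression versus an interpretable machine learning model evaluated with 10 × 10-fold cross-validation, can be sketched as follows. This is an illustrative sketch only, not the authors' pipeline: the Raine Study data are not public, so synthetic data stand in for the screener subscales, and the sample size, class balance, and model settings are assumptions.

```python
# Hypothetical sketch of the modeling comparison in the abstract:
# logistic regression vs. a shallow decision tree, scored with
# repeated (10 x 10-fold) cross-validation. Synthetic data stand in
# for the restricted Raine Study data; all parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in: ~1,300 children, 8 screener-subscale features,
# binary outcome (typical vs. delayed language at 10 years).
X, y = make_classification(n_samples=1322, n_features=8, n_informative=4,
                           weights=[0.7, 0.3], random_state=0)

# 10 repeats of stratified 10-fold cross-validation, mirroring the
# "10 x 10 fold" accuracy estimates reported in Supplemental Material S6.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)

results = {}
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("decision tree", DecisionTreeClassifier(max_depth=3))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    results[name] = scores.mean()
    print(f"{name}: mean accuracy {scores.mean():.3f} (SD {scores.std():.3f})")
```

A shallow `max_depth` keeps the tree interpretable enough to visualize, which is the role decision trees play in the Results section; in practice the two mean accuracies typically land close together, consistent with the study's finding of comparable predictive performance.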
Supplemental Material S1. Flow diagram representing the participants included in the current study from the wider Raine cohort (Straker et al., 2017).
Supplemental Material S2. Characteristics of participants in the Raine cohort (Straker et al., 2017) who were “included” (n = 1,180) and “not included” (n = 1,481) in the current study.
Supplemental Material S3. Summary statistics for the typical (n = 1,040) and delayed (n = 476) language groups based on performance on the Infant Monitoring Questionnaire (IMQ, Squires, Bricker, & Potter, 1990) subscales at 3 years and the Clinical Evaluation of Language Fundamentals–Third Edition (CELF-3, Semel, Wiig, & Secord, 1995) at 10 years.
Supplemental Material S4. Results from the adjusted multivariate logistic regression with Infant Monitoring Questionnaire (IMQ, Squires, Bricker, & Potter, 1990) communication items at 3 years.
Supplemental Material S5. Results from multivariate logistic regression with all Infant Monitoring Questionnaire (IMQ, Squires, Bricker, & Potter, 1990) items at 3 years.
Supplemental Material S6. Predictive validity of the Infant Monitoring Questionnaire (IMQ, Squires, Bricker, & Potter, 1990) on language outcomes using default machine learning parameters (percentage accuracy using 10 × 10 fold cross-validation).
Armstrong, R., Symons, M., Scott, J. G., Arnott, W. L., Copland, D. A., McMahon, K. L., & Whitehouse, A. J. O. (2018). Predicting language difficulties in middle childhood from early developmental milestones: A comparison of traditional regression and machine learning techniques. Journal of Speech, Language, and Hearing Research. Advance online publication. https://doi.org/10.1044/2018_JSLHR-L-17-0210