ASHA journals
S1_JSLHR-22-00180ruan.mp4 (12.17 MB)

Compare language input measures (Ruan et al., 2023)

Posted on 2023-03-31 by Yufang Ruan, Adriel John Orena, and Linda Polka

Purpose: Measuring language input, especially for infants growing up in bilingual environments, is challenging. Although the ways to measure input have expanded rapidly in recent years, there are many unresolved issues. In this study, we compared different measurement units and sampling methods used to estimate bilingual input in naturalistic daylong recordings.

Method: We used the Language Environment Analysis system to obtain and process naturalistic daylong recordings from 21 French–English bilingual families with an infant at 10 and 18 months of age. We examined global and context-specific input estimates and their relation with infant vocal activeness (i.e., volubility) when input was indexed by different units (adult word counts, speech duration, 30-s segment counts) and using different sampling methods (every-other-segment, top-segment).
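The two sampling methods can be contrasted with a minimal sketch (not the authors' implementation, and not derived from actual LENA exports; the per-segment word counts below are hypothetical):

```python
# Illustrative sketch: contrast every-other-segment sampling with
# top-segment sampling on hypothetical per-30-s-segment adult word counts.

def every_other_segment(counts):
    """Keep alternate 30-s segments (a systematic sample of half the day)."""
    return counts[::2]

def top_segments(counts, n):
    """Keep the n segments with the densest input (highest word counts)."""
    return sorted(counts, reverse=True)[:n]

# Hypothetical adult word counts, one value per 30-s segment
word_counts = [12, 0, 45, 3, 0, 60, 8, 27, 0, 5]

half = every_other_segment(word_counts)
top = top_segments(word_counts, len(half))

# Mean input per segment: the systematic sample tends to track the
# full-day mean, while the top-segment sample overestimates it.
print(sum(word_counts) / len(word_counts))  # full day
print(sum(half) / len(half))                # every-other-segment sample
print(sum(top) / len(top))                  # top-segment sample
```

The printed means make the paper's point concrete: selecting only the densest segments inflates the per-segment input estimate relative to the full recording, whereas the systematic every-other-segment sample stays close to it.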

Results: Input measures indexed by different units were strongly and positively correlated with each other and yielded similar results regarding their relation with infant volubility. As for sampling methods, sampling every other 30-s segment was representative of the entire corpus. However, sampling the top segments with the densest input was less representative and yielded different results regarding the relation between input and infant volubility.

Conclusions: How well a selected sample portrays the input a child receives across a day, and how that input correlates with the child's vocal activeness, depend on the choice of input unit and sampling method. Different input units appear to generate consistent results, whereas caution is warranted when choosing a sampling method.

Supplemental Material S1. Video-animated guide of Figure 2.

Ruan, Y., Orena, A. J., & Polka, L. (2023). Comparing different measures of bilingual input derived from naturalistic daylong recordings. Journal of Speech, Language, and Hearing Research, 66(5), 1618–1630.


This project is supported by the China Scholarship Council–McGill Joint Fellowship (201706040050) and Fonds de Recherche du Québec – Société et Culture Doctoral Research Scholarship (2022-B2Z-303687) to Yufang Ruan as well as by grants from the Social Sciences and Humanities Research Council of Canada (410-2015-0385) and from the Centre for Research on Brain, Language and Music to Linda Polka.