ASHA journals

Power analysis and reducing overfitting in machine learning (Ghasemzadeh et al., 2024)

Software posted on 2024-02-22, 15:44. Authored by Hamzeh Ghasemzadeh, Robert E. Hillman, and Daryush D. Mehta.

Purpose: Many studies using machine learning (ML) in speech, language, and hearing sciences rely on cross-validation with a single data split. This study's first purpose is to provide quantitative evidence that would incentivize researchers to instead use the more robust data splitting method of nested k-fold cross-validation. The second purpose is to present methods and MATLAB code to perform power analysis for ML-based analysis during the design of a study.

Method: First, the significant impact of different cross-validations on ML outcomes was demonstrated using real-world clinical data. Then, Monte Carlo simulations were used to quantify the interactions among the employed cross-validation method, the discriminative power of features, the dimensionality of the feature space, the dimensionality of the model, and the sample size. Four different cross-validation methods (single holdout, 10-fold, train–validation–test, and nested 10-fold) were compared based on the statistical power and confidence of the resulting ML models. Distributions of the null and alternative hypotheses were used to determine the minimum required sample size for obtaining a statistically significant outcome (5% significance) with 80% power. Statistical confidence of the model was defined as the probability of correct features being selected for inclusion in the final model.
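The distinction between the compared schemes hinges on where model selection happens: in nested cross-validation, feature selection and tuning are repeated inside an inner loop on each outer-training set, so the outer test folds never influence any modeling choice. The sketch below illustrates that structure in Python with a deliberately simple classifier and filter-style feature selection; it is not the authors' MATLAB implementation, and the function names (`nested_cv_accuracy`, `select_features`, `centroid_classifier`) are illustrative choices.

```python
import numpy as np

def k_folds(n, k, rng):
    """Shuffle indices 0..n-1 and split them into k roughly equal folds."""
    return np.array_split(rng.permutation(n), k)

def centroid_classifier(X_tr, y_tr, X_te):
    """Nearest-centroid prediction for a binary (0/1) problem."""
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    d0 = ((X_te - c0) ** 2).sum(axis=1)
    d1 = ((X_te - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

def select_features(X, y, n_keep):
    """Rank features by absolute class-mean difference; keep the top n_keep."""
    diff = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
    return np.argsort(diff)[::-1][:n_keep]

def nested_cv_accuracy(X, y, outer_k=10, inner_k=5, candidates=(1, 2, 5), seed=0):
    """Nested k-fold CV: the inner loop tunes the number of retained features
    using only the outer-training data; the outer loop yields an unbiased
    accuracy estimate on folds never seen during selection."""
    rng = np.random.default_rng(seed)
    outer_folds = k_folds(len(y), outer_k, rng)
    correct = 0
    for i, test_idx in enumerate(outer_folds):
        train_idx = np.concatenate([f for j, f in enumerate(outer_folds) if j != i])
        # Inner CV over the outer-training samples only.
        inner_folds = k_folds(len(train_idx), inner_k, rng)
        best_n, best_acc = candidates[0], -1.0
        for n_keep in candidates:
            acc = 0.0
            for m, val_local in enumerate(inner_folds):
                tr_local = np.concatenate([f for j, f in enumerate(inner_folds) if j != m])
                tr, va = train_idx[tr_local], train_idx[val_local]
                feats = select_features(X[tr], y[tr], n_keep)
                acc += (centroid_classifier(X[tr][:, feats], y[tr], X[va][:, feats]) == y[va]).mean()
            acc /= inner_k
            if acc > best_acc:
                best_acc, best_n = acc, n_keep
        # Refit on the full outer-training set with the tuned setting; score the held-out fold.
        feats = select_features(X[train_idx], y[train_idx], best_n)
        pred = centroid_classifier(X[train_idx][:, feats], y[train_idx], X[test_idx][:, feats])
        correct += (pred == y[test_idx]).sum()
    return correct / len(y)
```

A single-holdout analogue would run feature selection once on the same split used to report accuracy, which is precisely the leakage the nested design avoids.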

Results: ML models generated based on the single holdout method had very low statistical power and confidence, leading to overestimation of classification accuracy. Conversely, the nested 10-fold cross-validation method resulted in the highest statistical confidence and power while also providing an unbiased estimate of accuracy. The required sample size using the single holdout method could be 50% higher than what would be needed if nested k-fold cross-validation were used. Statistical confidence in the model based on nested k-fold cross-validation was as much as four times higher than the confidence obtained with the single holdout–based model. A computational model, MATLAB code, and lookup tables are provided to assist researchers with estimating the minimum sample size needed during study design.
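The sample-size logic described above (null and alternative distributions, 5% significance, 80% power) can be sketched with a much-simplified stand-in: testing whether a classifier's test-set accuracy exceeds chance with an exact one-sided binomial test, and searching for the smallest n that reaches the target power. This is an assumption-laden illustration of the general approach, not the released Compute_RequiredSampleSize.m code, and `min_sample_size` is a hypothetical helper name.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def min_sample_size(p_alt, p_null=0.5, alpha=0.05, power=0.80, n_max=2000):
    """Smallest test-set size n at which a one-sided binomial test of observed
    accuracy against chance (p_null) attains the target power when the true
    accuracy is p_alt.

    Null distribution: Binomial(n, p_null); alternative: Binomial(n, p_alt)."""
    for n in range(5, n_max + 1):
        # Critical count: smallest k whose tail probability under the null is <= alpha.
        k_crit = next(k for k in range(n + 1) if binom_sf(k, n, p_null) <= alpha)
        # Power: probability of reaching k_crit under the alternative.
        if binom_sf(k_crit, n, p_alt) >= power:
            return n
    return None
```

As expected, a stronger true effect (higher p_alt) drives the required n down, which mirrors the paper's broader point that the cross-validation scheme, feature quality, and dimensionality jointly determine the minimum sample size.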

Conclusion: The adoption of nested k-fold cross-validation is critical for unbiased and robust ML studies in the speech, language, and hearing sciences.

Supplemental Material S1. Compute_NestedModelConfidence.m

Supplemental Material S2. Compute_RecommendedSampleSize.m

Supplemental Material S3. Compute_RequiredSampleSize.m

Supplemental Material S4. Feature_Selection.zip

Ghasemzadeh, H., Hillman, R. E., & Mehta, D. D. (2024). Toward generalizable machine learning models in speech, language, and hearing sciences: Estimating sample size and reducing overfitting. Journal of Speech, Language, and Hearing Research, 67(3), 753–781. https://doi.org/10.1044/2023_JSLHR-23-00273

Funding

Research reported in this publication was supported by the National Institute on Deafness and Other Communication Disorders Grants T32 DC013017 (awarded to Christopher Moore and Cara Stepp), P50 DC015446 (awarded to Robert Hillman), and K99 DC021235 (awarded to Hamzeh Ghasemzadeh).
