Posted on 2015-06-01. Authored by William D. Hula, Stacey Kellough, and Gerasimos Fergadiotis.
Purpose: The purpose of this study was to develop a computerized adaptive test (CAT) version of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) in order to reduce test length while maximizing measurement precision. This article directly extends a companion article (Fergadiotis, Kellough, & Hula, 2015), in which we fitted the PNT to a 1-parameter logistic item response theory model and examined the validity and precision of the resulting item-parameter and ability-score estimates.
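For readers unfamiliar with the 1-parameter logistic (Rasch) model mentioned above, the following is a minimal illustrative sketch, not the authors' implementation: the probability of a correct naming response depends only on the difference between person ability (theta) and item difficulty (b).

```python
import math

def p_correct(theta: float, b: float) -> float:
    """1-PL (Rasch) model: P(correct | theta, b) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability equals an item's difficulty has a 50% chance of
# naming that item correctly; higher ability raises the probability.
print(p_correct(0.0, 0.0))  # 0.5
print(p_correct(2.0, 0.0) > p_correct(0.0, 0.0))  # True
```

Under this model, each examinee is characterized by a single ability score on the same scale as the item difficulties, which is what makes adaptive item selection possible.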
Method: Using archival data collected from participants with aphasia, we simulated two PNT-CAT versions and two previously published static PNT short forms and compared the resulting ability estimates to those obtained from the full 175-item PNT. We used a jackknife procedure to keep the samples used for item-parameter estimation and CAT simulation independent.
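The core of a CAT simulation like the one described above can be sketched as follows. This is a hedged, self-contained illustration under simplifying assumptions (a hypothetical item bank, maximum-likelihood scoring by bisection, and maximum-information item selection); it is not the authors' actual algorithm or item parameters.

```python
import math
import random

def p_correct(theta, b):
    # 1-PL (Rasch) response probability.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    # Under the 1-PL model, Fisher information for an item is P * (1 - P),
    # maximized when item difficulty matches the current ability estimate.
    p = p_correct(theta, b)
    return p * (1.0 - p)

def update_theta(responses, administered, bank, lo=-4.0, hi=4.0):
    # Maximum-likelihood ability estimate: find the root of the score
    # function sum(x - P) by bisection (the score is decreasing in theta).
    def score(theta):
        return sum(x - p_correct(theta, bank[i])
                   for i, x in zip(administered, responses))
    if score(lo) < 0:   # all responses incorrect: clamp to lower bound
        return lo
    if score(hi) > 0:   # all responses correct: clamp to upper bound
        return hi
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def simulate_cat(true_theta, bank, n_items, seed=0):
    # Administer n_items adaptively: at each step, give the unused item
    # with maximum information at the current ability estimate, sample a
    # response from the 1-PL model, and re-estimate ability.
    rng = random.Random(seed)
    theta = 0.0
    administered, responses = [], []
    remaining = set(range(len(bank)))
    for _ in range(n_items):
        best = max(remaining, key=lambda i: item_information(theta, bank[i]))
        remaining.remove(best)
        x = 1 if rng.random() < p_correct(true_theta, bank[best]) else 0
        administered.append(best)
        responses.append(x)
        theta = update_theta(responses, administered, bank)
    return theta

# Hypothetical 175-item bank with difficulties spread over [-3, 3].
bank = [-3.0 + 6.0 * i / 174 for i in range(175)]
est = simulate_cat(true_theta=1.0, bank=bank, n_items=30)
```

In a jackknife design like the one the article describes, the difficulties in `bank` would be re-estimated with each simulated examinee's data held out, so that the same responses never inform both item calibration and the CAT simulation.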
Results: The PNT-CAT recovered full-PNT scores with accuracy equal to or better than that of the static short forms. Measurement precision was also greater for the PNT-CAT, although a comparison of adaptive and static nonoverlapping alternate forms showed minimal differences between the two approaches.
Conclusion: These results suggest that CAT assessment of naming in aphasia can reduce test burden while maximizing the accuracy and precision of score estimates.
In this article, we extend work presented in a companion article (Fergadiotis, Kellough, & Hula, 2015) to construct and evaluate an item response theory (IRT)–based computerized adaptive version of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996). Using simulations based on responses previously collected from participants with aphasia, we evaluated agreement between computerized adaptive short forms and the full PNT and compared the results to those obtained using recently developed static short forms (Walker & Schwartz, 2012). We also evaluated the equivalence of alternate test forms created by the adaptive-testing algorithm.
This research was supported by VA Rehabilitation Research & Development Career Development Award C7476W, and the VA Pittsburgh Healthcare System Geriatric Research Education and Clinical Center. The contents of this paper do not represent the views of the Department of Veterans Affairs or the United States Government.