An Examination of Assessment Fidelity in the Administration and Interpretation of Reading Tests

Abstract: 

Researchers have expressed concern about implementation fidelity in intervention research but have not extended that concern to assessment fidelity, or the extent to which pre-/posttests are administered and interpreted as intended. When studying reading interventions, data gathering heavily influences the identification of students, the curricular components delivered, and the interpretation of outcomes. However, information on assessment fidelity is rarely reported. This study used direct observation to examine the fidelity with which individuals paid to serve as testers for research purposes administered and interpreted reading assessments for middle school students. Of 589 testing packets, 45 (8% of the total) had to be removed from the data set for significant abnormalities, and another 484 (91% of the remaining packets) had correctable errors found only in double scoring. Results indicate that reading assessments require extensive training, highly structured protocols, and ongoing calibration to produce reliable and valid results useful in applied research.

Citation: 

Reed, D. K., & Sturges, K. M. (2013). An examination of assessment fidelity in the administration and interpretation of reading tests. Remedial and Special Education, 34, 259-268. doi:10.1177/0741932512464580

Audience: 
IRRC Researcher