The University of Iowa

The Importance of Fidelity When Implementing Reading Interventions

Image: Three teachers learning a new literacy intervention

Literacy leaders, such as instructional coaches, can help teachers ensure they are implementing a new literacy intervention as it was intended to be used.


Posted on: February 6, 2018

Editor's Note: This is the first of a multi-part series on defining, measuring, and using fidelity to improve literacy instruction.

Educational researchers and practitioners concerned with implementing a specific curriculum, program, practice, or strategy (collectively termed interventions) frequently refer to fidelity, or how closely the implementation was aligned to the way the intervention was designed. Although the concept of fidelity seems simple, in reality it is quite complex and not yet well understood (Harn, Damico, & Stoolmiller, 2017). 

Why Is Fidelity Important?

Substantial research goes into designing interventions, determining the key ingredients, and building the evidence base over multiple studies to conclude that an intervention is effective (Abry, Hulleman, & Rimm-Kaufman, 2015). Thus, when implementing an intervention in classrooms, it is important to implement it as intended, or with fidelity, to increase the likelihood of consistently obtaining the results you are looking to achieve (Harn et al., 2017; Harn, Parisi, & Stoolmiller, 2013). Although some degree of teacher adaptation is anticipated, interventions implemented with higher fidelity tend to be more effective (Quinn & Kim, 2017). The relationship might be described as: “Effective Interventions × Effective Implementation = Improved Outcomes” (Fixsen, Blase, Metz, & Van Dyke, 2013).

Additionally, the Every Student Succeeds Act of 2015 (Pub. L. No. 114-95) mandates that schools use “evidence-based” interventions, meaning interventions supported by research establishing their effectiveness (Lam, Mercer, Podolsky, & Darling-Hammond, 2016; United States Department of Education, 2016). A previous IRRC blog post provides additional information on the importance of using evidence-based approaches whenever possible and on how “evidence-based” can be distinguished from “research-based.” However, using strong evidence-based interventions is not enough. It is also important to use interventions as intended, or with fidelity, so that the aspects that make the intervention effective (as established by evidence) are not lost through modifications or deviations by those using it.

Conceptualizing Fidelity

The concept of fidelity was developed by researchers to monitor whether interventions were implemented as intended within a research study, making it possible to conclude that results were explained by the interventions themselves and not by variation in implementation (Harn et al., 2013). Although still a major part of research on developing and testing interventions, fidelity has also become more routinely monitored in schools during implementation (Harn et al., 2013). For example, when a new intervention is being implemented, rather than assuming teachers are implementing it with fidelity because they have the same materials and attended the same professional development, administrators or coaches may monitor fidelity directly. Doing so can reveal systemic areas of difficulty, such as a topic that was not covered thoroughly during professional development, as well as areas for individual improvement.

Let us consider a full implementation scenario to see the impact fidelity can have. Two first-grade teachers in the same school with similar student achievement levels and student behavioral needs were asked to implement a new core reading curriculum. The curriculum included whole-class and small-group instruction daily in the 90-minute literacy block. Both teachers received the same professional development and access to all materials. After 4 weeks, an instructional coach followed up with the teachers. The coach specifically asked how the teachers implemented the curriculum. One teacher reported implementing the curriculum’s whole-group and small-group instruction every day for the full 90-minute literacy block. The other teacher reported providing the curriculum’s whole-group instruction for 60 minutes and then using other materials for small-group instruction for the remaining 30 minutes. When the instructional coach checked the students’ universal screening data, she noticed that students of the first teacher were making greater gains than those of the second teacher.

In the above example, the first teacher implemented the curriculum with greater fidelity than the second teacher, which is likely why the first teacher’s students made greater gains than the second teacher’s students. Additionally, small-group instruction was one of the key ingredients of the core reading curriculum. When the second teacher substituted different small-group materials, it could no longer be assumed that the teacher was using the intervention whose evidence base established its effectiveness.

In this post, we introduced the concept of fidelity, or how closely the implementation is aligned to the way the intervention was designed. We also explained why fidelity is important: it increases the likelihood of consistently obtaining the results you are looking to achieve and maintains the intervention’s evidence base.

However, fidelity is much more than just “using the materials” for the right amount of time. In the above example, the fact that the first teacher provided both whole- and small-group instruction as the curriculum recommended does not guarantee that fidelity was high. Fidelity has many different components, and our next post in the series will go into more depth on how fidelity is operationalized and measured.


References

Abry, T., Hulleman, C. S., & Rimm-Kaufman, S. E. (2015). Using indices of fidelity to intervention core components to identify program active ingredients. American Journal of Evaluation, 36, 320–338. doi:10.1177/1098214014557009

Every Student Succeeds Act of 2015, Pub. L. No. 114-95, 129 Stat. 1802 (2015).

Fixsen, D., Blase, K., Metz, A., & Van Dyke, M. (2013). Statewide implementation of evidence-based programs. Exceptional Children, 79(2), 213–230. doi:10.1177/001440291307900206

Harn, B. A., Damico, D. P., & Stoolmiller, M. (2017). Examining the variation of fidelity across an intervention: Implications for measuring and evaluating student learning. Preventing School Failure: Alternative Education for Children and Youth, 61, 289–302. doi:10.1080/1045988X.2016.1275504

Harn, B. A., Parisi, D., & Stoolmiller, M. (2013). Balancing fidelity with flexibility and fit: What do we really know about fidelity of implementation in schools? Exceptional Children, 79, 181–193. doi:10.1177/001440291307900204

Lam, L., Mercer, C., Podolsky, A., & Darling-Hammond, L. (2016). Evidence-based interventions: A guide for states. Palo Alto, CA: Learning Policy Institute.

Quinn, D. M., & Kim, J. S. (2017). Scaffolding fidelity and adaptation in educational program implementation: Experimental evidence from a literacy intervention. American Educational Research Journal, 54, 1187–1220. doi:10.3102/0002831217717692

United States Department of Education. (2016). Using evidence to strengthen education investments (non-regulatory guidance). Washington, DC: Author.