5 Questions to Evaluate the Efficacy of Education Technology

Efficacy, the ability to produce an intended result, is often the first and last concern of edtech buyers. And in an era of skepticism and fact-checking, taking a company’s claims of effectiveness at face value is no longer an option. To properly evaluate the efficacy of education technology, educators need to see research and results from real use cases. Unfortunately, the research behind educational solutions can be difficult to interpret and is often based largely on testimonials. Several legislative actions have attempted to codify acceptable standards for research studies, but as legislatures turn over every few years, even those requirements keep changing, often leaving educators bewildered and ill-informed when evaluating a solution for their school or district. Fortunately, considering these five questions can help you evaluate the efficacy of education technology.

A Little Legislation History

In the era of No Child Left Behind, researchers tried to bring a pharmaceutical approach to education research by conducting randomized controlled trials (RCTs). In a medical RCT, participants who meet specific criteria are randomly split into a control group that receives a placebo and a treatment group that receives the active drug. While this works well in a clinical setting, RCTs are difficult to implement in schools because classrooms have a myriad of variables that are difficult to control. Students come into classes of varying size with a wide range of background knowledge. There are also ethical questions around withholding opportunities for success from children simply because they were randomly assigned to the control group.

As a result, the Every Student Succeeds Act (ESSA) of 2015 expanded the burden of proof for resources adopted by schools by defining four tiers of evidence. In general, under ESSA, evidence-based education technology must show a statistically significant effect on improving student outcomes. The legislation lays out the four tiers of evidence as:

  • Tier 1: Strong Evidence: At least one well-designed and well-implemented randomized controlled trial or experimental study.
  • Tier 2: Moderate Evidence: At least one well-designed and well-implemented quasi-experimental study. Unlike an RCT, a quasi-experimental study is not randomized, and students are placed in different segments based on determining factors. For example, two 5th-grade classes in the same school could be compared if one class uses the edtech product while the other does not.
  • Tier 3: Promising Evidence: At least one well-designed and well-implemented correlational study with statistical controls for selection bias. Unlike tiers 1 and 2, this is not a structured experiment, but a correlational study.
  • Tier 4: Demonstrates a Rationale: A well-specified logic model that builds on high-quality prior research or a prior positive evaluation, and a research-based rationale to believe that the intervention will likely improve student outcomes.

But do educators truly have the time to perform a meta-analysis on the research around a product they plan to implement in their school or district? Most likely, no. Instead, educators can use the following five questions to help them determine if an education technology product has been proven effective. Although other considerations need to be taken into account before investing in education technology (check out our 5 questions about shopping for blended learning), using these questions to help evaluate the efficacy of education technology is a good place to start.

5 Questions to Evaluate the Efficacy of Education Technology

1. What does it mean to be evidence-based?

Evidence-based products are programs, practices, strategies, and activities that quantitatively show they positively impact student outcomes. The studies use sound research design and are based on high-quality data analysis. Often these studies are reviewed by independent researchers to validate results.

2. How do I know if the technology I’m evaluating is evidence-based?

There are three major groups tracking evidence-based studies under the ESSA requirements. The Center for Research and Reform in Education at Johns Hopkins University runs an Evidence for ESSA website, which focuses on math and reading programs only. The Best Evidence Encyclopedia provides summaries of scientific reviews in math, reading, and science. And the What Works Clearinghouse reviews the existing research on different products, programs, and practices to provide information to educators. All three have very strict requirements for their studies and only evaluate studies that meet tier 1 or tier 2 requirements.

If a product does not appear in one of those three tools, it is not automatically disqualified from being evidence-based. Many providers include efficacy studies on their websites, and those studies may still be high-quality research that meets ESSA standards. For example, Edgenuity® includes a customizable report that shows student success by state based on pre- and post-lesson quizzes, in addition to other studies, research briefs, and whitepapers.

3. How can I evaluate the quality of the research?

Assuming the product you are considering claims to be grounded in research, you must then evaluate the quality of the research itself. Digital Promise worked with colleagues at the Johns Hopkins University Center for Research and Reform in Education to create a tool for analyzing product evaluation studies. This rubric helps educators compare the results of a study to the needs of their specific school or district while also evaluating the study’s quality and adherence to ESSA standards.

4. Was a research-based approach used to develop the product?

While ESSA addresses the efficacy of a product and its effect on student outcomes, it is also important to consider how the product was developed. Was it built on assumptions, or did the company draw on current research and sound pedagogical practices to develop a product that works for students? As education technology becomes more pervasive, it is essential that technology be used to enhance learning, and not just as “technology for technology’s sake,” says one district administrator. In fact, Interactive Educational Systems Design argues that quality tools should:

      • Provide systematic and explicit instruction, designed to help students acquire, practice, and apply skills and knowledge
      • Promote deep learning and metacognition
      • Incorporate multimedia to reduce cognitive load and help students learn more effectively
      • Implement principles of Universal Design for Learning, incorporating multiple means of representation, expression, and engagement to meet students’ individual needs

5. Will it meet my students’ unique needs?

Assuming the education technology under evaluation meets the above criteria, educators may find it helpful to list the specific needs of their student body and compare them to the attributes of the tool. For example, a school with a high population of English language learners, student athletes, or students with special education needs must take those students into account when evaluating a solution. Ensure that the program is NCAA® approved for your student athletes, offers supports recommended by third-party organizations like WIDA for your ELL students, and includes customization tools to accommodate a student’s 504 plan or IEP. Similarly, if you’re looking for a program to serve your on-level students, different features (like enrichment tools or technology-enhanced items) may appeal to you.

Regardless of what solution you choose, taking the time to evaluate the efficacy of education technology will help justify your decision to stakeholders, including administrators, teachers, board members, and even parents and students. Furthermore, the education technology program could be funded using ESSA allocations, which require evidence of efficacy. Title I funds and seven competitive grant programs under ESSA award preference points to programs supported by the top three tiers of evidence, while funds from Titles II–IV accept evidence from any of the four tiers. By using available resources and a little critical thinking, leaders can choose education technology solutions that are research-backed and appropriate for their specific needs. And, as one principal recently said, “If you’re doing things that are good for kids and research-based, you can’t go wrong.”

Sources

Carolan, J., & Zielezinski, M. (2019, May 21). Debunking the ‘gold standard’ myths in edtech efficacy. EdSurge. Retrieved from https://www.edsurge.com/news/2019-05-21-debunking-the-gold-standard-myths-in-edtech-efficacy

Efficacy in educational technology: guidelines for evaluating what really works. (2017, December 5). Lexia. Retrieved from https://www.lexialearning.com/blog/efficacy-educational-technology-guidelines-evaluating-what-really-works

Francisco, A. (2015, December 4). How strong is the evidence? A tool to evaluate studies of ed tech products. Digital Promise. Retrieved from https://digitalpromise.org/2015/12/04/how-strong-is-the-evidence-a-tool-to-evaluate-studies-of-ed-tech-products/

Interactive Educational Systems Design, Inc. (2013, February). How Edgenuity courses align with research on effective instruction. Retrieved from https://www.edgenuity.com/wp-content/uploads/2017/01/Foundations-Paper-2.pdf

About the Author

Emily Kirk

After growing up in the Phoenix area, Emily escaped the heat to study in Flagstaff where she graduated from Northern Arizona University with a BA in Art History. She went on to work and study at The University of Phoenix, earning her MBA. After volunteering to teach English in Chile for a semester, she worked in sales and marketing for a major ocean freight carrier. Throughout her career, Emily has also taught ballet, so she is thrilled to be part of the Where Learning Clicks team where she can combine her love of teaching and business acumen to help transform classrooms.