To Broaden Evidence Use Beyond the Federal Law’s Requirements, Use Common Sense

Consider how the costs and benefits of one strategy compare to the alternatives

Covid-19 has spawned what feels like an endless flow of decisions for leaders, practitioners, and families. It seems smart to base decisions on evidence—but what counts?

As the authors of a new book about evidence, we’ve thought a lot about this. The Oxford English Dictionary defines evidence as “grounds for belief; facts or observations adduced in support of a conclusion or statement; the available body of information indicating whether an opinion or proposition is true or valid.” This concept of “the available body of information” rings true to us; it also runs counter to common perceptions of evidence as a single scientific finding, considered in a vacuum.

We decided to write our book in part in response to the confusion we heard from education leaders about what “counts” as evidence under the Every Student Succeeds Act, or ESSA. ESSA, enacted in 2015, sorts interventions and programs into four tiers based on the rigor of the methods used to evaluate them. The goal is worthy—to help educators choose interventions that caused improvements in student outcomes, rather than those that were simply correlated with better outcomes. But the end result ignores other opportunities to use evidence, as well as the critical subjective judgment calls educators ultimately must make for themselves. Luckily, it isn’t that hard to learn how to use evidence for yourself in a more nuanced way.

To start, we advocate for using evidence on a regular basis, not just when choosing an intervention by consulting resources like the What Works Clearinghouse or EdReports. Opportunities for evidence use fall into three main categories:

  • To diagnose a problem—because understanding why your problem exists is a big part of solving it;
  • To assess implementation of a strategy—because when “research says” something is effective, it was usually evaluated under the best-case scenario with lots of support for strong implementation, but that’s not usually how strategies get put into place in the real world; and
  • To evaluate the impact of a strategy—because you want to know if the strategy (not something else that might be correlated with it) caused an improvement in an outcome you value and can measure.

ESSA’s tiers of evidence only consider the methods and findings of research that falls into this last bucket, evaluating the impact of a strategy. This leaves out a lot of important information about what strategies leaders might want to try and how they should adjust course based on past experiences. By asking questions about diagnosing problems and assessing implementation, questions that other types of information can answer, education leaders can make better decisions. Sometimes these answers will come from using your own data rather than looking to existing evaluations. Sometimes they will come from other forms of research, like qualitative research on barriers to remote learning for students with disabilities, or from more basic scientific research, such as studies of aerosol flows.
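
To make the first category concrete, here is a minimal sketch, in Python, of what diagnosing a problem with your own data might look like. The file and column names (attendance.csv, grade, has_disability, days_absent) are hypothetical placeholders for whatever your student information system actually exports.

```python
# A sketch of diagnosing a problem with your own data. The file and
# column names here are hypothetical placeholders.
import pandas as pd

# One row per student, with columns such as grade, has_disability,
# and days_absent.
attendance = pd.read_csv("attendance.csv")

# Break absences out by subgroup to see where the problem concentrates
# before choosing a strategy to address it.
by_group = (
    attendance.groupby(["grade", "has_disability"])["days_absent"]
    .agg(["mean", "count"])
    .sort_values("mean", ascending=False)
)
print(by_group)
```

A table like this won’t tell you why absences concentrate where they do, but it tells you where to look next, whether through family surveys, interviews, or further data.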

The evidence terms in ESSA were motivated by a reasonable set of concerns. There is a lot of low-quality evidence out there, and there are a lot of vendors seeking to market their wares as evidence-based. ESSA focuses on statistical significance and random (or random-like) variation as ways to distinguish good research from bad; these are objective ways to rank research studies, well suited to a potential audit over the use of federal funds, but they are not the only quality criteria that matter. Don’t get us wrong: all else equal, we like statistically significant findings derived from randomized controlled trials more than statistically insignificant findings that can’t account for why different students or schools received different treatments in the first place. But all else is rarely equal, and to get the most out of research, you need to assess it more subjectively, based on your individual circumstances and goals.

To do this, consumers of evidence should start with relevance. Is the finding germane to your context, or did the study take place in a setting that differs in ways that render it irrelevant? This is a huge question for education research in the present moment, when the body of research was generated almost entirely in brick-and-mortar schools. Now leaders need evidence about remote learning, and even the learning that does take place in person must occur under dramatically different conditions than in the past.

Next, ask if it is convincing—and know that evidence can be convincing even without a randomized controlled trial. For example, if a school district decides to send 2nd graders to school more frequently than 3rd graders in the coming year, it creates a natural experiment for studying the impact of how often students attend school. While this may seem ridiculous—more is better, right?—districts facing budget constraints will want to know how much more and how much better, as they face tradeoffs between things like investing in technology for distance learning and increasing the amount of time students meet in person.
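
For readers who want to see what analyzing such a natural experiment might look like, here is a minimal sketch under invented assumptions: the file name (scores.csv) and columns (grade, fall_score, spring_score) are hypothetical, and a raw comparison of 2nd and 3rd graders is only a starting point.

```python
# A sketch of analyzing the natural experiment described above.
# File and column names are hypothetical: one row per student, with
# grade (2 or 3), fall_score, and spring_score.
import pandas as pd

scores = pd.read_csv("scores.csv")
scores["gain"] = scores["spring_score"] - scores["fall_score"]

# Naive first cut: average score gain by grade. Second graders attended
# in person more often, so the gap in gains is a rough estimate of the
# effect of the extra in-person days.
gain_by_grade = scores.groupby("grade")["gain"].mean()
naive_effect = gain_by_grade.loc[2] - gain_by_grade.loc[3]
print(f"Gap in average gains (grade 2 minus grade 3): {naive_effect:.2f}")

# A more convincing version would subtract the same grade-2-versus-
# grade-3 gap from a pre-pandemic year, so that ordinary grade-level
# differences in growth don't masquerade as an effect of attending
# school in person.
```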

Even when research is relevant and convincing, you don’t just want to know if something “works” or doesn’t—you need to assess its practical significance. This means thinking about the feasibility of implementing a strategy, and how the costs of doing so compare to the benefits. Finally, for next-level evidence use, don’t just consider one strategy in isolation. Instead, consider how this cost-benefit analysis of one strategy compares to the alternatives.
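
As a back-of-the-envelope illustration of that last step, the sketch below puts two strategies on a common footing. Every effect size and cost figure is invented; in practice you would plug in estimates from research relevant to your own context.

```python
# A back-of-the-envelope comparison of two strategies. Every effect
# size and cost below is invented for illustration only.
strategies = {
    # name: (assumed effect in standard deviations, assumed cost per student)
    "high-dosage tutoring": (0.25, 1500.0),
    "extended school day": (0.10, 800.0),
}

for name, (effect_sd, cost) in strategies.items():
    # Dollars per student for each standard deviation of achievement
    # gain: a simple way to compare strategies with different costs
    # and different effects.
    print(f"{name}: ${cost / effect_sd:,.0f} per standard deviation gained")
```

On these made-up numbers, tutoring costs more per student but less per unit of learning gained, which is exactly the kind of comparison a single “it works” label obscures.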

None of this inquiry is prohibited under ESSA. In most cases, common-sense evidence widens the lens from what ESSA requires, asking more types of questions and considering more types of information to answer them. While common-sense evidence is about the available body of information, ESSA allows you to defend a practice as evidence-based with a single study—even if other studies, potentially more relevant ones, reach contrary conclusions. Since you can find a study to show just about anything, ESSA’s evidence requirements sadly have turned into compliance exercises for many districts.

Our plea: don’t let the compliance exercise dissuade you from doing the harder and more important work of engaging genuinely with evidence. It’s just good common sense.

Nora Gordon is an associate professor at the McCourt School of Public Policy at Georgetown University. Carrie Conaway is a senior lecturer on education at the Harvard Graduate School of Education and former chief strategy and research officer for the Massachusetts Department of Elementary and Secondary Education. They are the authors of Common-Sense Evidence: The Education Leader’s Guide to Using Data and Research (Harvard Education Press, 2020).
