In the United States, we entrust state and local leaders with most of the consequential decisions affecting schools. It’s ironic, then, that the federal government funds most of the research and evaluation work in education. State and local leaders bear a responsibility to study the consequences of their decisions, and we will make much faster progress when they do.
At this very moment, chief academic officers around the country are choosing professional development providers to prepare teachers for the Common Core. Districts are choosing curricula. Why can’t we provide them with better evidence to guide their choices? Or, at the very least, why can’t we compare the 2014-15 gains for those making different choices now, so that we have a clearer view of what worked going into the 2015-16 school year? Otherwise, we will continue reinventing the wheel. School leaders need to get out of the wheel reinvention business.
Basic research into teaching and learning is rightly considered a federal responsibility; if the federal government did not fund it, no one would. Evaluation research, however, is different. In business and in everyday life, we expect decision-makers to perform due diligence: to launch small demonstration programs before any large-scale rollout, to track the impact of new policies, and to make course corrections when the results are disappointing. The responsibility to evaluate rests with those making the decisions, because that’s where the evidence will have the greatest impact.
How much should states and districts be spending to inform their own decision-making? In health care, where the federal commitment is large, federal spending on health research (primarily at the National Institutes of Health) is approximately three percent of combined federal expenditures on Medicare and Medicaid. In education, federal spending on the Institute of Education Sciences ($562 million) is a little less than one percent of the budget of the U.S. Department of Education ($69 billion). Using federal spending as a benchmark, one might expect state and local governments to devote between one and three percent of their education spending to research and evaluation. With state and local spending on K-12 and higher education totaling nearly $900 billion per year, just one percent would be $9 billion per year! I’ve never seen an accounting of state and district spending on research, but experience suggests it’s nowhere near that number.
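For anyone who wants to trace the arithmetic, here is a minimal back-of-the-envelope sketch using only the round figures cited above; the dollar amounts are the approximations from the text, not precise budget data.

```python
# Back-of-the-envelope benchmark using the round figures cited in the text.
ies_budget = 0.562e9           # Institute of Education Sciences, ~$562 million
ed_budget = 69e9               # U.S. Department of Education, ~$69 billion
state_local_spending = 900e9   # state/local K-12 and higher ed, ~$900 billion/year

# Federal research spending as a share of the federal education budget (~0.8%).
print(f"Federal share: {ies_budget / ed_budget:.1%}")

# Applying the one-to-three-percent benchmark to state and local spending.
for share in (0.01, 0.03):
    print(f"{share:.0%} of state/local spending = "
          f"${share * state_local_spending / 1e9:,.0f} billion per year")
```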
So why do state and local education agencies underinvest—and what might be done to encourage them to do more?
First, the state longitudinal data systems—linking students to teachers and tracking achievement over time—are a new phenomenon. Until very recently, collecting student achievement data and tracking individual students over time were very costly when done on an ad hoc basis—and such expenses were rarely justifiable when evaluating a single program or policy. The construction of longitudinal data systems has suddenly made impact evaluation in education far cheaper and more feasible. When prices drop that fast, it often takes organizations a while to respond. With state budgets recovering from the recent downturn, now may be the perfect time to point out the new opportunities for evaluation research.
Second, perhaps because they have not had access to such rich data before, state and local agencies lack the analytical capacity to make the best use of it, and the available staff is spread quite thin. Rather than try to replicate the organizational structure of contract research firms (which depend on Ph.D.-level labor for their customized evaluation work), states should pursue a different model: investing in software to automate the evaluation process, using standard algorithms to identify comparison students and schools, and hiring master’s-level analysts to manage the process. (Harvard’s Strategic Data Project is one source for such analysts.) Such a system could be faster and cheaper than the traditional contract research model.
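To make the idea of standard comparison-group algorithms concrete, here is a minimal illustrative sketch, assuming a simple nearest-neighbor match on prior-year test scores; the data layout and matching rule are hypothetical illustrations, not a description of any state’s actual system or of the methods contract research firms use.

```python
# Illustrative sketch only: match each treated student to the untreated student
# with the closest prior-year score, then compare average gains. The Student
# fields below are hypothetical, not a real state data model.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Student:
    student_id: str
    prior_score: float   # prior-year test score
    gain: float          # current-year achievement gain
    treated: bool        # e.g., the student's teacher received the new PD program

def matched_gain_difference(students: list[Student]) -> float:
    """Average gain of treated students minus that of their nearest-neighbor matches."""
    treated = [s for s in students if s.treated]
    controls = [s for s in students if not s.treated]
    diffs = []
    for t in treated:
        match = min(controls, key=lambda c: abs(c.prior_score - t.prior_score))
        diffs.append(t.gain - match.gain)
    return mean(diffs)

# Toy usage with four made-up students:
roster = [
    Student("a", 300, 12.0, True),
    Student("b", 305, 9.0, False),
    Student("c", 280, 7.5, True),
    Student("d", 282, 6.0, False),
]
print(matched_gain_difference(roster))  # (12-9 = 3.0 and 7.5-6 = 1.5) -> 2.25
```

A real system would, of course, match on many more characteristics and report uncertainty, but the point is that once the longitudinal data exist, this kind of routine comparison can be automated rather than commissioned from scratch each time.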
Third, there is no organized constituency demanding better evidence of impact. This is where philanthropists and business groups could help. Philanthropists often support school-based programs (such as an after-school program or a scholarship program) without knowing whether their support made a difference. Businesses provide tutors and mentors and summer jobs for local students and don’t see the impacts of their contributions. They, as well as state legislators and school board members, should be demanding greater transparency into impacts. If they were to organize, they could pressure state and local agencies to provide them with better evidence.
Finally, unlike entrepreneurs, who profit both when they find a better way to do something and when they discover that an activity isn’t adding to the bottom line, public managers often have more to lose than to gain from better evidence. They may prefer not to know when a program is not working. (The same is true of managers inside bureaucratic organizations in the private sector.) As a result, they face a conflict of interest when they both make the decisions and conduct the research in-house. To safeguard against such conflicts, state and local agencies should empower a semi-autonomous body—an “ombudsman for research”—to do much of the work involved. When a third party is responsible for gathering the evidence, it can be held accountable for the speed and quality of its work (if not for the specific results).
Isn’t it more efficient for the federal government to do this? Won’t states be duplicating efforts? Not necessarily. Locally derived evidence will be more influential in local policy debates. Moreover, the impact of any intervention will depend on local conditions. The effect of universal pre-school depends on the availability of non-subsidized alternatives, regulations governing program quality, the presence of skilled teachers—all of which may vary by site. However, even if the impacts vary, an evaluation in one state may still be relevant to the decisions of others, so each state, acting on its own, may still underinvest. The states themselves could form consortia, pooling resources to answer common questions of interest. (That will be much more feasible now that many states will be using similar assessments.) Moreover, the federal government could provide matching funds to encourage states and districts to invest in evaluation research—but it shouldn’t try to fund all such work on its own.
-Tom Kane
This first appeared on the Brown Center Chalkboard.