The Miami Herald recently re-published an op-ed originally appearing in The Conversation. In the essay, education professors Christopher Lubienski and Joel Malin wonder why states keep adopting and expanding private school choice programs when such initiatives, in their view, have failed. Their answer essentially is that school choice researchers and advocates have duped policymakers by “moving the goalposts,” emphasizing the positive effects of choice on outcomes besides the test scores that advocates initially stressed to sell the programs. Upon close examination, the Lubienski and Malin argument collapses like a house of cards.
(Corey DeAngelis has already published an evidence-rich response to Lubienski and Malin at The Washington Examiner that covers some of the flaws I discuss here.)
The Lubienski and Malin argument has four main components. First, programs must be evaluated only based on their effects on the outcomes promised by their advocates. Second, private school choice programs were sold to policymakers solely based on the promise that they would boost the test scores of participants. Third, recent evaluations of private school choice programs have demonstrated that all of the programs have permanent negative effects on the test scores of participants. Fourth and finally, “teams from the University of Arkansas” (meaning me, Jay Greene, and our fantastic doctoral students) only started focusing the attention of the academic and policymaking community on the non-test-score effects of school choice, including its effects on educational attainment and civic values, after the test-score outcomes from choice evaluations turned negative. All four of those claims are false.
Lubienski and Malin argue that programs only can be judged successful if they are effective at producing the specific outcomes promised by their supporters. Evaluators are unfairly “moving the goalposts” if they instead discover that a program has positive effects that weren’t originally predicted. On its face, their claim is ridiculous. Why would it be preferable to know less about a program’s effects than to know more? Why would anyone think that a narrowly-focused program evaluation is superior to a comprehensive one?
Certainly program evaluators themselves do not subscribe to the Lubienski and Malin position. For example, the authors of a prominent textbook on program evaluation state, “Although input from stakeholders is critical, the evaluator should not depend solely on their perspective to identify the issues the evaluation will address.” They, instead, argue that program evaluations should be expansive, including considerations of all relevant outcomes that might be influenced by the initiatives. “[W]e must caution against an overly narrow interpretation of what information is useful,” the experts write. Research questions should come from multiple sources including evaluator expertise, program theory, and statutory mandates, in addition to “stakeholder claims.” As someone who teaches a graduate course in program evaluation and regularly conducts comprehensive evaluations of private school choice programs, I’m on the side of the evaluation experts: more knowledge is better than less.
But let’s assume, for the sake of argument, that Lubienski and Malin are correct and choice programs should be evaluated solely based on the outcomes promised by their supporters. Their next claim is that voucher advocates sold choice to the public solely based on predictions that such programs would boost participant test scores. A wealth of evidence contradicts that claim.
Milton Friedman, the originator of the modern idea of private school choice, justified choice mainly in terms of parent empowerment, the positive effects of competition on the performance of affected public schools, and civic values. Throughout the seminal essay that launched school vouchers, Friedman discusses the civic outcomes of government-run and privately run schools. After pointing out that government-run schools of the 1950s are highly stratified by race because they draw students based on segregated neighborhoods, and private schools of the 1950s are highly stratified by income because low-income families can’t afford the tuition, he writes that “The widening of the range of choice under a private system would operate to reduce both kinds of stratification.”
John Chubb and Terry Moe did focus on the positive effects of private schooling on participant test scores in their book Politics, Markets, and America’s Schools. Still, much of the debate over the desirability of private school choice always has centered on the questions of what effects it has on school-level racial segregation, the students “left behind” in public schools, and student non-cognitive outcomes such as civic values. Lubienski himself, in a book co-authored with Sarah Theule Lubienski, states “our analyses and the analyses of others indicate that [school choice] efforts can create a…more socially segregated system of schooling.” In that same book, the Lubienskis focus attention on the question of the competitive effects of school choice, denigrating the more than 20 rigorous studies of the question that show positive effects and applauding various critiques of this deep and vital research base. Since they study more than just the test-score effects of school choice, why can’t other people?
Choice skeptic Henry Levin of Teachers College has long argued that private school choice programs should be evaluated based on their effects on four outcomes: expanding options for parents, productive efficiency (including effects on educational attainment as well as test scores), racial integration, and civic values (see here & here). All six of my longitudinal evaluations of private school choice programs have tried to cover all four of the outcomes recommended by Levin, apparently to the chagrin of Lubienski and Malin. I didn’t move the goalposts in 2019. If anyone did, it was Henry Levin, way back in 1998.
So, we have determined that it flies in the face of professional standards to limit school choice evaluations only to their effect on participant test scores, and that questions of school integration, the competitive effects of choice on nonparticipants, and the non-achievement effects of choice on participants long have been central to debates over school choice. What about the Lubienski and Malin claim that all of the recent choice evaluations have found that choosers suffer initial achievement losses that they never make up? That statement is a gross exaggeration. Lubienski is notorious for cherry-picking only the private school choice results that confirm his ideological bias. That bias is on full display in this latest essay, where he and his co-author refer to the early set of 10 experimental studies reporting largely positive test-score effects of school choice as “a small set of studies” while somehow the recent set of only four studies, three of which show some enduring negative achievement effects, is a larger and more convincing evidence base.
A more accurate characterization of findings from recent studies is that some private school choice initiatives have persistent negative effects on student achievement, mainly in math. Only David Figlio and Krzysztof Karbownik’s quasi-experimental evaluation of the Ohio EdChoice program found statistically significant negative achievement effects in reading in the final year of the evaluation that were robust to different analytic techniques. The negative reading impacts of the Indiana Scholarship Program were sensitive to changes in samples and student matching techniques. The negative reading effects of the Louisiana Scholarship Program similarly were inconsistent across samples and statistical models, a vital detail that Lubienski and Malin omit from their essay. The Ohio, Indiana, and Louisiana evaluations all reported negative effects on math achievement in the final year of the evaluations that were robust. Importantly, the recently concluded evaluation of the D.C. Opportunity Scholarship Program found that initial negative effects of that program on student math scores that were trumpeted in the Lubienski and Malin essay disappeared completely by the third and final year of the evaluation. In direct contradiction to a Lubienski and Malin claim, students in one of the recent private school choice evaluations completely made up the initial achievement ground they lost by switching to a private school.
The final claim by Lubienski and Malin is the easiest to debunk. They assert that the University of Arkansas research team only focused attention on the civic values and attainment effects of private school choice after the test-score effects started coming up negative (with the Figlio & Karbownik study in 2016). Jay Greene and I are political scientists by training. From the very beginning of our long careers evaluating school choice programs, we have focused on the impacts of choice on civic outcomes. Jay started publishing evaluations of the effect of private schooling on civic values way back in 1998, 18 years before the first school choice evaluation reporting negative test score effects (see here, here, and here). My first school choice study focused on its effects on student civic values, including political tolerance, voluntarism, and patriotism. I followed up that 2001 publication with a co-edited book on the topic, way back in 2004, and a systematic review of the many studies of school choice and civic values in 2007.
My research team was the first to report the effects of a private school choice program on high school graduation rates, not because I wanted to “move the goalposts” but because the U.S. Congress directed in law that educational attainment be an outcome evaluated in our study. That’s right, policymakers themselves, in their wisdom, have demanded that non-test-score outcomes be the subject of private school choice evaluations. As an evaluator, I didn’t move the goalposts. I just kicked the ball through them.
Patrick J. Wolf is a distinguished professor and 21st Century Chair in School Choice in the Department of Education Reform at the University of Arkansas College of Education and Health Professions.
A version of this article originally was published at RedefineEd.