Rely on Local Actors, Instead of Faulty Information, To Make Judgments about School Quality

Editor’s note: This post is the third in an ongoing discussion between Fordham’s Michael Petrilli and the University of Arkansas’s Jay Greene that seeks to answer this question: Are math and reading test results strong enough indicators of school quality that regulators can rely on them to determine which schools should be closed and which should be expanded—even if parental demand is inconsistent with test results? Prior entries can be found here and here.

It’s always nice to find areas of agreement, but I want to be sure that we really do agree as much as you suggest, Mike. I emphasized that it should take “a lot more than ‘bad’ test scores” to justify overriding parental preferences. You say that you agree. But at the end, you add that we may have no choice but to rely primarily on test scores to close schools and shutter programs—or else “succumb to ‘analysis paralysis’ and do nothing.”

This is a false dichotomy. If all we have are unreliable test scores, we don’t have to make decisions based on them or “do nothing.” Instead, we could rely on local actors who have more contextual knowledge about school or program quality. If the charter board, local authorizer, and parents think a school is doing a good job even when test scores look “bad,” we should defer to them. That isn’t doing nothing; it’s relying on those who know more than can be gleaned from test scores. And quite often, those more knowledgeable local actors will be parents, which is why I think we should show strong deference to parental preferences. We don’t have to substitute uninformed decisions by distant regulators for those of more knowledgeable parents.

The danger with your argument—that we may have no choice but to rely on test scores—is that it rationalizes ignorant actions by policy makers whose knowledge of school or program quality consists almost entirely of test score results. Even worse, they almost always rely on levels of test results rather than gains. It’s important to emphasize how crude and inaccurate decisions based on test scores typically are, rather than to imagine them to be as sophisticated as analyses found in leading journals (which are still quite imperfect). Using only levels of test scores, regulators and policy makers are quite content to label schools serving highly disadvantaged populations as “bad.” The perverse result is that those schools trying to serve needy populations, or those that do not focus narrowly on math and reading test scores, are likely to be punished or closed.

I’m glad we agree that “it should take a lot more than ‘bad’ test scores” to “close a school or shutter a program in the face of parental demand.” And I concur that we may never be able to develop other reliable indicators of school quality to be used by distant regulators or policy makers, including measures of character skills like grit and conscientiousness. But if we’re unable to develop strong measures of school quality that can be used remotely, the logical conclusion to be drawn is not that we ought to rely on them anyway. Instead, we should rely on the judgments of those closer to the situation, including parents, who have better information about school quality.

I accept that this will sometimes mean closing schools or programs that some parents nevertheless want. But I believe that few schools with long waiting lists will also be poorly graded by local actors using their broad contextual knowledge. Of all charter schools closed by local authorizers or their own boards, the vast majority had financial problems—meaning that they generally suffered from a lack of parental demand. It will be a rarity for parental assessments of quality to be at odds with those of local authorizers making decisions based on a lot more than test scores.

So I hope that we really agree that math and reading test results are not strong enough indicators of school quality that regulators can rely on them to determine which schools should be closed and which should be expanded. This means that we should instead accept the judgments of those with much more information about school quality, and it will be extremely rare that these more informed assessments of quality will be at odds with parental preferences.

– Jay Greene

This first appeared on Flypaper.
