Harvard’s Dan Koretz is just out with a thoughtful, immensely readable book that takes dead aim at test-based accountability. The volume is titled The Testing Charade: Pretending to Make Schools Better (U. Chicago Press, 2017), which is a pretty pithy summation of his argument. I’ve had the chance to read the book, and yesterday Dan visited AEI to discuss it with Brookings scholar (and former IES director) Russ Whitehurst and Nina Rees, president of the National Alliance for Public Charter Schools. You can see their lively discussion here.
There’s much to say on testing and accountability in 2017. But today I’ll stick to offering just five thoughts sparked by the book and yesterday’s exchange.
1. We have done a lot of genuinely stupid stuff in the name of “accountability.” Koretz notes that he was moved to write this book partly in response to some of the inane teacher-evaluation practices that sprang up in the post-Race to the Top rush. For instance, he opens the book by talking about Shauna Paedae, a National Board Certified math teacher in Florida with a master’s in stats and three decades of experience. Because Paedae taught advanced math to eleventh and twelfth graders, while the Florida FCAT tested math only through grade eight, 50 percent of her evaluation was based “on the school-wide performance of students taking the tenth-grade FCAT reading test—a test in a different subject administered . . . to different students in an earlier grade” (p. 3). Some can try to rationalize this kind of inanity if they wish, but I see it as mindless, dangerous, and destructive. It insults educators and breeds cynicism. And yet the sheer amount of this sort of thing that has been imposed under the mantra of “accountability” is as remarkable as it is appalling.
2. Unrealistic goals breed gamesmanship, manipulation, and, often, outright cheating. Koretz is right on this one. As Checker Finn and I put it a decade ago in Education Next:
[No Child Left Behind] promise[d] that every U.S. schoolchild will attain “proficiency” in reading and math by 2014. Noble, yes, but also naive, misleading, and in some respects dysfunctional . . . No educator believes that universal proficiency in 2014 is attainable. Only politicians promise such things. The inevitable result is weary cynicism among school practitioners . . . In hindsight, NCLB’s passage was less about improving schools or fostering results-based public sector accountability than about declaring fealty to a gallant but utopian ambition.
The easiest thing in the world is for politicians or bureaucrats to dream up arbitrary, impressive-sounding targets for schools; after all, they don’t have to do any of the work, and they’re rarely held responsible when the targets aren’t met. Yet, even after all we went through with NCLB, today’s ESSA discussions continue to feature debates about whether unserious, pie-in-the-sky targets are sufficiently “ambitious.” It’s almost as if we haven’t learned a thing.
3. It matters immensely how and why scores go up. And yet, advocates and policymakers seem dreadfully uncurious about that. The casual ease with which I hear schools and teachers labeled as “good” or “bad” based solely on test scores is disconcerting, because it actually matters quite a lot why scores move. Koretz’s volume offers an accessible master class in how to think about all this. But, for ease and consistency’s sake, I’ll just reiterate what I noted back in June:
There are at least six reasons that scores may be going up:
• For a variety of reasons, students may be learning more reading and math. The tests are simply picking that up. All good.
• Students may be learning more in general. And the reading and math scores are a proxy for that. Even better.
• Instructional effort is being shifted from untested subjects and activities to the tested ones (e.g., reading and math). Not great if we value the full breadth of the curriculum, but potentially a reasonable decision to reemphasize reading and math.
• Teachers are learning what gets tested and students are becoming increasingly acclimated to the tests.
• Schools are so focused on test preparation that scores improve even if students aren’t learning more.
• Scores are being manipulated in various ways. This can mean things as perfidious as cheating or as mundane as starting the school year earlier.
Only the most obtuse can look at this list and then imagine that “it doesn’t matter” why scores go up. Yet, barely a day goes by when I’m not having a conversation or reading something in which movement in reading and math scores is treated as a self-evident, unqualified measure of learning.
4. Tests have an important and valuable role. Koretz takes pains to acknowledge this, but it will surely get lost in the shuffle. As Whitehurst noted, the book tends to read like an anti-testing tract, no matter what qualifiers Koretz offers. But tests are a valuable tool for getting a read on the educational landscape, checking our lazy assumptions and biases, and informing classroom instruction and schoolwide decisions. That’s especially true if you agree with Rees when she argues that she’s never seen a good school with bad test scores or a bad school with good scores, but it’s true even if you’d take issue. Blistering critiques of testing would be more constructive and better received, I suspect, if those making them did a better job of acknowledging the uses of testing and were less sweeping and more specific when calling for change.
5. There are real trade-offs in all of this. Koretz does a nice job of exploring this in his concluding chapters, but he’s pretty convinced that the benefits of test-based accountability are minuscule and are dwarfed by the costs. I’m sympathetic. But I’m also swayed by Whitehurst, who forcefully points to the body of empirical work suggesting that test scores and value-added differences matter. The upshot is that we need to wrestle with how best to judge and balance the costs and benefits of test-based accountability. In that light, it’s worth keeping in mind the realities raised by Rees when she points out that the alternatives to testing that we’ve tried (such as observers and classroom audits) are expensive and have an unimpressive record of success in the US. What we need is a frank, measured discussion of alternatives and how to improve the existing system. Of course, that would mark a sharp break with our 21st century tradition of simple-minded clashes between those who celebrate the wonders of test-based accountability and those who denounce it as a threat to the republic.
As I noted up top, these are just some initial reactions to Koretz’s engaging book. If you’d like, you can pick up a copy of The Testing Charade yourself, or watch yesterday’s discussion here.
— Frederick Hess
Frederick Hess is director of education policy studies at AEI and an executive editor at Education Next.
This first appeared on Rick Hess Straight Up.