SIG’s Downfall: Judge, and Be Judged, by Proficiency Rates

During your pre-Turkey trot out of town, you might have missed the big news about the federal School Improvement Grants program. Education Week’s Politics K–12 blog, which generously called the results a “mixed picture,” put it this way:

While more than two-thirds of schools in the first cohort (which started in 2010-11) saw gains in reading and math after two years in the program, another third of the schools actually declined, despite the major federal investment, which included grants of up to $500,000 a year, the department said Thursday.

Even Arne Duncan himself could muster only faint praise for the schools, acknowledging their “incremental progress.”

Andy Smarick, a longtime SIG skeptic, showed a great deal of holiday cheer in not shouting, “I told you so.” But neither was he shy in stating the obvious, labeling the results “disappointing but completely predictable.” He went a bit further in a Washington Post quote, arguing that “you can’t help but look at the results and be discouraged . . . . We didn’t spend $5 billion of taxpayer money for incremental change.”

And lest you say, “Hey, it’s only money,” pause a moment on that. Five billion dollars. Enough to control malaria. Enough to implement Core Knowledge in every single elementary school in the country. Enough to keep 2,000 Catholic schools alive.

And instead, we poured the money into dysfunctional schools that, in all likelihood, look the same today as similar high-poverty, low-performing schools that didn’t get the money. We could have just piled up the cash and set it all on fire.

But wait. Are we sure the story is that grim? Is there, perhaps, a Christmas miracle lurking inside this tale?

It’s not just the good questions that Alyson Klein poses, legitimate though they are. There’s a much more fundamental issue: All we’ve seen are school-level proficiency rates. We have no idea whether the students in these “failing” schools have made significant progress. For all we know, the kids in these SIG schools could be making incredible gains. Maybe the five-billion-dollar investment has been worth every penny.

Do you think this feat is mathematically impossible? As Matthew Di Carlo has explained about a million times, and as I’ve explained once or twice myself, it’s all too feasible for a school to make big value-added gains while not showing any progress in its rate of students reaching the proficient level. Take a high-poverty middle school, for instance—a very typical SIG grantee. Every year it receives sixth graders who arrive reading and doing math at a third-grade level. Every year its amazing sixth-grade teachers help those students make two years of progress. Yet by the end of sixth grade, none of those students are proficient in reading or math. Is this school a failure? Or a huge success?
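To make that arithmetic concrete, here’s a minimal sketch in Python. Every number in it (the entering grade level, the annual growth, the proficiency cutoff, the enrollment) is invented for illustration; nothing here comes from actual SIG data:

```python
# A hypothetical SIG middle school. All numbers are illustrative,
# chosen only to show how growth and proficiency can diverge.

ENTERING_LEVEL = 3.0      # sixth graders arrive at a third-grade level
ANNUAL_GROWTH = 2.0       # two years of progress in one school year
PROFICIENCY_CUTOFF = 6.0  # "proficient" means performing at grade level

incoming = [ENTERING_LEVEL] * 100                    # 100 students
end_of_year = [level + ANNUAL_GROWTH for level in incoming]

proficiency_rate = sum(level >= PROFICIENCY_CUTOFF for level in end_of_year) / len(end_of_year)
avg_growth = sum(after - before for before, after in zip(incoming, end_of_year)) / len(incoming)

print(f"Proficiency rate: {proficiency_rate:.0%}")       # 0% -- the school "fails"
print(f"Average growth: {avg_growth:.1f} grade levels")  # 2.0 -- enormous gains
```

Zero percent proficient, two full grade levels of growth: by the Department’s yardstick, this school is a disaster; by any sane measure of teaching, it’s a triumph.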

Let me state it as clearly as possible: The SIG analysis released by the Department of Education is completely worthless. Looking at changes in proficiency rates tells us virtually nothing about the progress (or lack thereof) of these schools.

So that’s good news for the SIG program, right? Its supporters can say, “The jury’s still out. We need more data. It’s too soon to judge.”

Not so fast.

Here’s the rub: consider how those SIG schools were determined to be “failing” in the first place. Yup: proficiency rates. Just as it’s possible that some of these schools are making great progress thanks to their SIG grants, it’s also possible (I would say likely) that some of them were never “failing” in the first place and were making great progress long before they were ever put on the “bad-schools” list. And since we know that proficiency rates are highly correlated with student poverty levels, and only weakly correlated with value added, all we’ve done is equate high poverty with low performance. We have no idea whether these schools are succeeding in educating their students or not unless we look at individual student progress over time.
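For the skeptics, here’s a toy simulation of that dynamic. Every parameter is invented (these are not estimates from any real accountability data): schools vary in poverty, which drags down where their students start, and, independently, in how much growth they produce. The proficiency label ends up tracking poverty, not teaching quality:

```python
# Illustrative simulation, not real data. Requires Python 3.10+
# for statistics.correlation.
import random
import statistics

random.seed(0)

poverty, value_added, proficient = [], [], []
for _ in range(500):                # 500 hypothetical schools
    pov = random.random()           # poverty rate, 0 to 1
    va = random.gauss(1.0, 0.3)     # growth produced, independent of poverty
    incoming = 6.0 - 3.0 * pov      # poorer schools start further behind
    outcome = incoming + va         # end-of-year achievement level
    poverty.append(pov)
    value_added.append(va)
    proficient.append(1.0 if outcome >= 6.0 else 0.0)

# Proficiency tracks poverty strongly; its link to value added is far weaker.
print("corr(poverty, proficient):    ", round(statistics.correlation(poverty, proficient), 2))
print("corr(value added, proficient):", round(statistics.correlation(value_added, proficient), 2))
```

In this toy world, the “failing” list is essentially a poverty list; whether a school actually teaches well barely moves its odds of landing on it.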

Nor have things gotten significantly better under Secretary Duncan’s waiver policy. A new paper by USC assistant professor (and Emerging Education Policy Scholar) Morgan Polikoff and his colleagues finds that many of the state accountability systems approved under the waivers continue to rely predominantly on proficiency rates instead of individual student progress over time. Consider this nugget:

One way in which proficiency rates remain a primary metric is in the identification of SIG schools (which were mainly chosen using proficiency rates) as priority schools in all states but Alaska, Wisconsin, and West Virginia. Other states, such as North Carolina, also use composite indices that are merely aggregate proficiency rates.

For sure, some—perhaps most—of these high-poverty, low-proficiency schools truly are crummy. They need a serious intervention—probably a do-over, or simply closure.

But some of these schools are probably doing great things for kids, as accountability guru Richard Wenning postulated this summer. The problem: Without looking at individual student data, we have no idea which schools they are. Double oops.

So here’s a modest suggestion: Let’s stop using proficiency rates to identify SIG schools, and let’s stop using proficiency rates to judge the success of the SIG program. In other words, SIG needs a do-over. A turnaround. A fresh start. Arne Duncan, Andy Smarick: Agreed?

-Mike Petrilli

This first appeared on the Fordham Institute’s Flypaper blog.
