The Concern about Subgroups in ESSA Accountability Systems May Be Overblown
A recent analysis by uber-wonk Anne Hyslop and her colleagues at the Alliance for Excellent Education adds to a long list of reports expressing concern that many states’ accountability systems are turning a blind eye to the performance of disadvantaged students and students of color. The analysis finds that, under the Every Student Succeeds Act, “Many states fail to include student subgroups meaningfully across two of the law’s most important accountability provisions: (1) school ratings and (2) the definitions used to identify schools for targeted support and improvement.”
On its face, it’s a reasonable worry. One of the few popular aspects of the original No Child Left Behind Act was the underlying principle that we shouldn’t rely on average test scores to determine the quality of a school; it’s important to consider how its vulnerable subgroups are performing, too. Surely federal policymakers didn’t intend to sweep such issues under the rug when they enacted ESSA, as Democrats on Capitol Hill and their allies in civil rights groups have been arguing.
Yet I’ve long suspected these concerns to be overblown—because of basic math. If a subgroup was large enough to be counted under a state’s accountability system, I reasoned, it would be large enough to drag down its school-wide grade, too. As a result, schools with poor performance for subgroups but high grades overall would be quite rare—unicorns, in effect.
New data from our home state of Ohio allow us to test this hunch empirically. To keep it simple, let’s pretend that Ohio’s school grades rely entirely on how well schools perform on helping individual students make progress over time, or “value added.” (This is not so different from what my ideal accountability system would look like, but alas, in the real world, Ohio’s is much more complicated.)
Here’s the key question: Are there many schools that do well on value added overall but where disadvantaged or black students fare poorly when it comes to making annual progress?*
The answer: no. Thanks to some number crunching by my colleague Aaron Churchill, we can see that there are fewer than twenty such schools—in a state with over 3,000! See the highlighted boxes below.**
Table 1. Value-added results in English language arts: All students versus economically disadvantaged students
Table 2. Value-added results in English language arts: All students versus black students
Table 3. Value-added results in math: All students versus economically disadvantaged students
Table 4. Value-added results in math: All students versus black students
Do the math and you find that less than 1 percent of schools in Ohio get A’s or B’s overall for value added but D’s or F’s for economically disadvantaged students or black students. I’m not going to lose sleep over that. But maybe you think a “C” grade overall is too high for schools with D’s or F’s for subgroup performance; even then, such schools amount to less than 3 percent of the total.
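For readers who want to check the arithmetic, here’s a quick sketch using the approximate counts cited above (the exact school counts are assumptions based on the rounded figures in this post, not precise state data):

```python
# Rough check of the "less than 1 percent" claim, using the
# approximate counts from the post (assumed round numbers).
total_schools = 3000   # "over 3,000" schools statewide
mismatched = 20        # "fewer than twenty" high-overall, low-subgroup schools

share = mismatched / total_schools * 100
print(f"{share:.2f}% of schools")  # comes out well under 1 percent
```

Even taking the counts at their upper bounds, the share of schools with strong overall grades but weak subgroup grades stays below 1 percent.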
These findings might apply only to Ohio; perhaps there’s something about its school attendance patterns or value-added measure that drives these results. I doubt it, but analysts should test the question empirically in other states.
And then they should stop fretting about this particular aspect of state accountability systems. After all, given the state of the world today, there are plenty of other things to worry about!
— Mike Petrilli
Mike Petrilli is president of the Thomas B. Fordham Institute, research fellow at Stanford University’s Hoover Institution, and executive editor of Education Next.
This post originally appeared in Flypaper.
** – Note that, while the state assigns A–F grades on a variety of metrics, it does not assign such ratings in its subgroup value-added reporting; for readability, we translated the “raw” value-added data into ratings using the state’s grading scale for schoolwide value-added ratings.