As readers know, I’m prone to lamenting that so much school reform represents a triumph of wishful thinking over common sense. As Mike McShane and I put it recently:
Education suffers from a curious malady. It is a field marked by passionate commitment, urgency, and high hopes. These are wonderful things. But they have also left many policymakers, reformers, philanthropists, and system leaders inclined to look always forward, confident that the next program or reform will be the one that delivers for kids. This assurance is an admirable quality, a healthy and wholly American optimism. But it can leave us lacking in perspective . . . As a result, we tend to do a poor job of learning from the missteps and miscalculations that have gone before.
Education is further saddled with a lack of institutional memory. Reform groups rely on hard-working cadres of impassioned twenty-somethings. Washington and the statehouses are filled with advocates and staff who do education for a little while before moving on to broader portfolios. Foundation executives are in place for a few years and then move (or get moved) on to the next thing. The most influential academics are focused on data sets and methodological tools, not the rhythms and currents of reform.
A consequence is that, time and again, seemingly sensible people embrace nifty reforms, hot new superintendents, and “miracle” school systems . . . and, only much later, realize that the packaging was a whole lot better than the product. Even two decades into the “accountability era,” it’s far too easy for snake-oil peddlers to trumpet a few numbers and insist that a given school, district, or supe has “cracked the code.” This is a problem any time, but doubly so when one realizes how shaky some of the underlying numbers really are.
Over at RealClearEducation, Connor Kurtz and I recently used last winter’s D.C. graduation rate kerfuffle as an excuse to recall a few of the celebrated “model” districts that ultimately turned out to be as much a product of data manipulation and marketing as of instructional improvement. I’d encourage you to check out the RCEd piece but, to keep things brisk, I’ll just point here to two go-rounds in Atlanta:
In the early 1980s, Atlanta schools seemed to have turned a corner. Under the leadership of Alonzo Crim, the city’s test scores showed remarkable gains. A New York Times headline proclaimed Atlanta an example of “urban education that really works” and termed its test gains “undeniable.” The president of the Carnegie Foundation for the Advancement of Teaching said Crim brought “stability and sustained credibility” to Atlanta’s schools. In 1984, Harvard University gave Crim an honorary doctorate and labeled him “a wise and perceptive schoolman.”
Well. It later turned out that Crim’s secret sauce involved having Atlanta schools evaluate lagging students by using tests from lower grades—instead of those from their actual grade levels. When Georgia ordered Atlanta to assess students based on their actual grade level, the gains evaporated. Today, the Crim era is remembered as an embarrassing chapter in the history of test manipulation.
And, two decades later, Atlanta went through this all over again:
In the late 2000s, Atlanta was being feted once again. In 2009, superintendent Beverly Hall was named national superintendent of the year by the American Association of School Administrators, which said that she “represent[ed] the ‘best of the best’ in public school leadership.” In 2010, Hall won the American Educational Research Association’s Distinguished Public Service Award for using “education data” as a tool “for promoting school reforms and increasing student achievement.” Then, in 2008, the Atlanta Journal-Constitution found that some schools’ test gains bordered on the mathematically impossible. Investigators uncovered systematic cheating by teachers and district staff. In the end, 11 Atlanta Public Schools administrators and teachers went to jail. At the time of her death, Hall faced up to 45 years in prison on criminal racketeering charges. “It’s like the sickest thing that’s ever happened to this town,” Georgia state representative Ed Lindsey lamented. “We were so enamored with the perception that we didn’t see the reality.”
Many such tales can be told, and some of the malfeasance is remarkable. But that’s not the whole story. The other part is that too many analysts, advocates, funders, and reporters behave like star-struck fanboys while all this is unfolding. Once a “model” district narrative gets going, it draws the notice of deep-pocketed funders and influential political leaders. It wins prizes and generates puff pieces. As the momentum builds, there are a lot of incentives for advocates and analysts to avoid getting crosswise with the politicos and philanthropists—and a lot of professional upsides to getting on the bandwagon.
The result is that districts get heralded as islands of possibility, superintendents are honored as paragons of leadership, and hard questions wind up going unasked. Here’s a plea: The next time any of us are inclined to celebrate a “model” district or program, let’s bring a skeptical mind to reported results and dog-and-pony school walkthroughs. Until results have been given a hard audit, the awkward questions have all been asked, and we really understand how and why things play out in practice, let’s work harder to balance enthusiasm with professional detachment.
— Frederick Hess
Frederick Hess is director of education policy studies at AEI and an executive editor at Education Next.
This post originally appeared on Rick Hess Straight Up.
Last updated October 16, 2018