School Accountability and the Infinite Information Problem

President-elect Donald Trump’s selection of Betsy DeVos as Secretary of Education has renewed the debate about public accountability in school-choice programs. Some reformers would prefer that the government interfere relatively little in which private schools participate in these programs and which schools families choose. Other reformers want a relatively high degree of government oversight, believing this is the surest strategy for generating strong student outcomes. Smart state accountability systems can produce important benefits, but we should also recognize the government’s limited ability to collect, analyze, and make use of the extraordinary amount of information relevant to school quality and family preferences.

There are obvious dangers associated with centralizing authority in vast, potent government bodies: It prevents communities from living according to their particular beliefs, inhibits continuous small-scale course corrections, relies on clumsy, expensive administrative units, and so on.

But two recent publications describe two other hazards that are too seldom discussed. Both relate to the hubris undergirding centralization, and both recall Friedrich Hayek’s seminal 1945 article “The Use of Knowledge in Society.” Hayek argues it’s impossible to create and maintain well-functioning, domineering central authorities because they will always lack the capacity to collect, analyze, and act on the infinite and ever-changing body of relevant information.

Instead, Hayek argued, authority should be decentralized among citizens, neighborhoods, community-based groups, firms, and others. He wrote, “The knowledge of the circumstances of which we must make use never exists in a concentrated or integrated form, but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess.”

Using this lens, we can see that centralization efforts often attempt to get around the infinite-information problem in one of two ways: by employing strategies to simplify and manage the Everest of information, or by creating a state apparatus that promises to cope with its consequences.

Cathy O’Neil’s new book Weapons of Math Destruction explores how institutions are increasingly using “big data” to simplify. Algorithms and mathematical models now shape a big part of our lives — and not only Amazon’s personalized advertisements. Formulas are used to calculate user-specific car-insurance rates, sort résumés, assess loan applications, predict recidivism rates, and more.

But such systems pose major risks. As the Wall Street Journal noted in its book review, we should be wary of “the biases of people who encode their notions in algorithms.” Worse, the details of the models are typically opaque to both those being assessed and those using the assessments for high-stakes decisions. As O’Neil told NPR, “we really have no idea what’s happening to most algorithms under the hood.” In K-12, we’ve seen this very debate rage over “value-added” models of student performance.

O’Neil argues these models can have discriminatory effects, exacerbating racial, economic, gender, and geographic inequalities. But we should also ask a more fundamental question: whether it’s ever possible to create a dispassionate algorithm capable of driving fair, high-stakes governmental decisions. Though a model will generate an ostensibly empirical, trustworthy “answer,” it will always be a mechanical product of a formula and inputs. That carries the possibility of false precision. Perhaps some important considerations couldn’t be quantified and were omitted from the formula; perhaps there’s major disagreement over how to weight relevant factors; perhaps the “answer” responds to a question different from the one asked by the policymaker.

Even more importantly, though, the mindset behind algorithm-based decision-making can be an uncomfortable fit with democracy and pluralism. We want individuals and their representatives to feel independent agency over key choices, and we want them to feel free to use their communities’ particular values. A heated political process, not a cold formula, is probably the best way to litigate and adjudicate competing claims.

Interestingly, Hayek forecast most of these issues. He pointed out that there is “very important but unorganized knowledge” particular to individual circumstances, that some of it “cannot enter into statistical form,” and that some experts regard such knowledge “with a kind of contempt.” Indeed, James Scott’s anti-technocratic masterpiece Seeing Like a State argues that complexity, practitioners’ craft, time-tested wisdom, and local custom are some of the greatest casualties of centralization. In total, algorithm-based decision-making pulls off much of its magic through the sleight-of-hand of simplification.

A different way for the state to try to address the infinite-information problem is to develop a colossal government apparatus. Peter J. Wallison’s new National Affairs article, “Decentralization, Deference, and the Administrative State,” explains why this is so perilous. Big-government opponents often focus on its expense and power to compel, but Wallison argues it can be undemocratic and unconstitutional: Regulatory agencies are empowered to make crucial policy decisions, craft rules with the weight of law, and implement behemoth programs. Accordingly, unelected officials inhabit a space between the executive and legislative branches and wield enormous, hard-to-check authority.

Wallison notes that Dodd-Frank authorized almost 400 new regulations to be enforced by at least eight different agencies. Six years on, a quarter of those regulations have yet to be adopted. So new, undefined government powers lie dormant, just waiting to be awakened by some government bureau. The scope of this authority is remarkable: After the New Deal, the Code of Federal Regulations had about 18,000 pages; today, it’s 175,000 pages.

In my view, Wallison seems to hit upon a real-life example of one of Robert Nozick’s famous criticisms of John Rawls’ A Theory of Justice. Nozick argued that a Rawlsian understanding of justice would require a nearly omnipotent state to meddle incessantly. Maybe state centralization inevitably begets executive-branch agencies with expansive legislative-like powers, including, eventually, substantial latent authority that can be animated as needed.

In recent years, a number of scholars have written about the administrative state’s sprawl, including books like Phillip Hamburger’s Is Administrative Law Unlawful? and Charles Murray’s By the People and articles like Steven Teles’s “Kludgeocracy” and Philip Wallach’s “The Administrative State’s Legitimacy Crisis.” One of Wallison’s major contributions is adding to this mix that a consequence of state centralization may be an extra-constitutional role for federal agencies that only courts can check.

The overarching point is that both publications show that a government’s decision to centralize doesn’t solve the infinite-information problem. It only reveals the state’s presumptuousness.

– Andy Smarick

Andy Smarick is a resident fellow at the American Enterprise Institute (AEI),  president of the Maryland State Board of Education, and a visiting scholar at the Johns Hopkins School of Education.

This first appeared, minus its introduction, on AEIdeas.

