Donkey in Disguise

Jack Jennings and the Center on Education Policy


Checked (all titles published by the Center on Education Policy):

From the Capital to the Classroom:
Year 1 (January 2003)
Year 2 (January 2004)
Year 3 (March 2005)

State High School Exit Exams series:
A Baseline Report (August 2002)
Put to the Test (August 2003)
A Maturing Reform (August 2004)
States Try Harder, but Gaps Persist: State High School Exit Exams 2005 (August 2005)

Pay Now or Pay Later: The Hidden Costs of High School Exit Exams (May 2004)

School Vouchers: What We Know and Don’t Know … and How We Could Learn More (June 2000)

Do We Still Need Public Schools? (1996)

Checked by Greg Forster

With the passage of the No Child Left Behind Act (NCLB), the proliferation of high school exit exams, the success of school choice initiatives, and a dozen other smaller if more bitter battles, education has become one of the hottest policy topics in Washington. That means there’s a booming market for education experts, especially those who claim to speak with the disinterested voice of reason among the gaggle of partisan squawkers and interest groups. Jack Jennings, a one-time king of Capitol Hill education policy and now head of the Center on Education Policy (CEP), is one such expert.

Jennings and the CEP (he founded the organization in 1995) provide research and expert opinion on a variety of education issues. Jennings is one of the mainstream press’s favorite go-to guys on education. He and the CEP appear frequently in the New York Times and the Washington Post commenting on education issues and are variously described as “nonpartisan” (Times, January 27, 2004; March 14, 2004; Post, March 16, 2004), “nonprofit” (Times, August 18, 2004; February 7, 2006), “a research group” (Times, April 4, 2005; May 11, 2005), and “independent” (Post, February 19, 2004; August 29, 2004; March 24, 2005). The Post’s David Broder called the CEP “an independent advocate for more effective public schools” (March 13, 2004). And on March 26 of this year, the Times turned over its most valuable piece of real estate—two columns on the top of the Sunday front page—to Jennings and the CEP to announce, two days before it was even released, a CEP study on NCLB’s impact on curriculum, again calling the organization “nonpartisan.”

The media seem to see Jennings and the CEP as the voices of education research and reason, an enviable position at a time when nonpartisans are hard to come by. Jennings uses this highly desirable media perch to promote findings that he says are the result of empirical research conducted by the CEP. He says, for instance, that NCLB is too strict and is underfunded, that its more controversial requirements are unworkable and should be scrapped, that only big new state spending can help kids pass exit exams, and that school choice is unproved and dangerous. Is this nonpartisanship or something else?

Accidental Social Science

Jack Jennings wasn’t always a professional “independent,” “nonpartisan” researcher. In fact, for the better part of three decades (from 1967 to 1994) he was one of the most powerful education policymakers on Capitol Hill, as a Democratic staffer for the House Education and Labor Committee at a time when the Democrats completely controlled the House. His influence over federal education policy was enormous: he worked on every major education bill that went through Congress in those years. It is not surprising that education journalists would still turn to a prominent policymaker like Jennings for quotations. But how does a lifelong partisan congressional staffer change his spots and become a disinterested professional researcher who follows the evidence wherever it leads?

The answer, in this case, is that he doesn’t.

One of the CEP’s most important publications, for instance, is an annual study of the effects of NCLB and its associated rules and regulations, “From the Capital to the Classroom.” These annual volumes make assertions about empirical facts (“students’ scores on the state tests used for NCLB are rising”; or “lack of capacity is a serious problem that could undermine the success of NCLB”) and provide policy recommendations (“some requirements of NCLB are overly stringent, unworkable, or unrealistic”; “the need for funding will grow, not shrink, as more schools are affected by the law’s accountability requirements”). On March 24, 2005, the Post carried a wire report hailing last year’s CEP study as “the most comprehensive review of the three-year-old No Child Left Behind law.” The story opened with the claim: “States will not come close to reaching all the struggling children unless the government spends more and lightens demands, according to an independent analysis.”

Education professionals looking for a detailed review of the policies, procedures, and regulations used under NCLB will find much that is useful in these annual studies and other CEP publications on NCLB. The CEP has also debunked some of the more intemperate claims about NCLB that have arisen from both sides of the political aisle.

The trouble with the studies is that they do not gather data about the issues they purport to examine. The authors have read large volumes of legal, regulatory, and administrative documents related to NCLB. This puts them in a good position to know, in detail, exactly what policies are being set. However, the CEP claims to be studying not what the policies are, but how they are implemented and how they are affecting education. To examine implementation the CEP relies exclusively on surveys of state education officials and interviews with public school staff. In other words, CEP researchers report as facts what the public school system says about how things are going in the public school system.

The CEP does not hide its methods. The studies quite openly attribute their findings to surveys and interviews. Phrases like “officials told us that…” and “according to the teachers we interviewed…” appear here and there. Nonetheless, both the studies themselves and Jennings’s public comments about them present the findings as scientifically confirmed facts, not merely as the claims made by public school officials and staff. They rely on the accuracy of these claims as a basis for policy recommendations.

Survey and interview data need not be dismissed across the board as unscientific. A survey using scientific methods—such as random sample generation and, where appropriate, credible assurances of anonymity for interviewees—can produce legitimate empirical data on many subjects. But the CEP is investigating the merits of an accountability system by asking the opinion of the institution that is being held accountable.

The CEP also claims that its annual NCLB reports include “case studies” of numerous districts. The term case studies creates the impression that some kind of scientific data gathering took place in at least some localities. But at the back of the study, we learn that the case studies are simply a more extensive set of interviews, some of them conducted in person during site visits. “Other research” was involved, though it is never identified or described, and there is no sign that it included any independent collection of data.

What the CEP does when it studies NCLB implementation is barely distinguishable from public relations work. It ascertains the public schools’ party line and then broadcasts it as fact.

A Consensus Builder and a Partisan

Before founding the Center on Education Policy in 1995, John F. (Jack) Jennings had a career that would be the envy of any loyal Democrat. Jennings grew up with a “typical ethnic Catholic background” in Chicago in the 1950s; as he told Education Week in a 1994 profile, his early career choices were limited to “one of the three Ps—priest, politician, or policeman. My mother wouldn’t let me be a policeman, so I chose the seminary.” But Jennings gave up the idea of becoming a priest and enrolled at Loyola University, where he got an early taste of the second P, joining the College Democrats. Later, while studying law at Northwestern University, he worked as a precinct captain for the local Democratic ward committeeman, Representative Roman Pucinski, who was elected to Congress in 1959.

Pucinski brought the young Jennings to Washington in 1967 and gave him a job as staff director of the Education and Labor Committee’s Subcommittee on Elementary, Secondary, and Vocational Education. In 1973 Jennings was promoted to associate counsel of the subcommittee by Representative Carl Perkins (D-Ky.) and soon became the full committee’s general counsel. During his long tenure, Jennings saw his influence grow substantially. His thumbprint is on nearly every major piece of education legislation, from the Elementary and Secondary Education Act to the Higher Education Act, passed or reauthorized during the years when Democrats controlled Capitol Hill. “Colleagues and lobbyists describe Mr. Jennings as at once disarming and demanding,” commented Education Week at the end of Jennings’s 27-year Capitol Hill career, “a consensus builder and a partisan.”

— Peter Meyer

Partisan Assumptions, Predictable Conclusions

The party line has been fairly consistent: public schools support the goals of NCLB, but they need more money and more “flexibility” from Washington to accomplish these goals. Sure enough, the CEP consistently finds that NCLB’s goals are laudable and that schools are refocusing their efforts on raising achievement among disadvantaged groups. This finding is always followed by claims that schools need lots more money to produce real improvement and that the most challenging NCLB requirements are unreasonable and need to be relaxed.

In the foreword to the first report in the series “From the Capital to the Classroom,” Jennings compares NCLB to the 1965 Elementary and Secondary Education Act (ESEA). “Some critics of the original ESEA say that it failed because it provided money without accountability, and that the NCLB Act will succeed because it requires strict accountability,” he wrote. “The ESEA of 1965 may have offered money without much education accountability, but the NCLB Act demands heavy accountability without much greater federal financial and technical assistance—an approach no more likely to succeed.”

The trouble with this reasoning is that the school system still has all the extra resources we have been pouring into it since 1965. In other words, schools now have both very high spending levels and accountability. Isn’t this exactly the combination Jennings says he wants?

The dynamic really at work in these studies becomes particularly clear when they are read in succession. The first came out in January 2003, when state fiscal crises were in the headlines. The CEP found, “The fiscal crisis in most states, coupled with the prospect of limited additional federal aid, could threaten the successful implementation of this very ambitious law.” In 2004 and 2005, the CEP made no further mention of fiscal crises but left unchanged its finding that states can’t implement NCLB without large spending increases. The later studies also acknowledge that NCLB is producing academic improvements, yet they conclude that schools still need more money and relaxed requirements. “Some supporters of the Act contend that early gains in state test scores mean that the Act is being administered effectively and is succeeding, so no changes are needed. Our intensive study of the Act for the last three years leads us to disagree.”

These studies’ factual claims and policy recommendations, which are their main purpose, have no scientific basis. They should be taken for what they are: the public school system’s party line, not valid empirical research.

Quality Babysitting: Teaching Costs Extra

The CEP’s other major line of research concerns high school exit exams, on which it also produces an annual study, titled “State High School Exit Exams,” along with other publications. As with its NCLB research, the CEP’s work on exit exams has useful aspects, including detailed information on state policies and regulations; again, this will satisfy the wonks. The CEP has contributed reasonable discussion of secondary issues, such as giving students who fail exit exams alternative ways to prove their academic proficiency. And it has been commendably principled on the question of whether exit exams increase dropout rates. The 2003 edition of its annual study snidely commented, “Exit exams are certainly not helping to keep students in school,” but as subsequent research produced new evidence that exit exams do not in fact increase dropout rates, the CEP moderated its stance to one of reasonable agnosticism.

But the exit exam studies are no more scientific than their NCLB cousins. In 2004 the CEP published its largest study of these exams, “Pay Now or Pay Later: The Hidden Costs of High School Exit Exams.” The title sums it up: schools need more money to cover the “hidden” costs of living up to the expectations set by exit exams.

The CEP’s central thesis is that while exit exams may appear to be inexpensive, they actually create the need for new spending by imposing on schools the burden of educating students to pass the test. “The direct costs of developing and administering the tests themselves represent a small share of the total costs of implementing a mandatory exit exam policy,” says “Pay Now or Pay Later.” “A realistic estimate of exam-related costs must also take into account the costs of remediation for students who fail exit exams or crucial state tests in earlier grades, as well as the ‘hidden’ costs of services needed to give students a substantial chance of passing these tests.”

The assumption here is that exit exams impose new responsibilities on schools. In fact, exit exams are only holding schools accountable for teaching basic skills, always assumed to be their primary responsibility. The CEP view implies that schools are a babysitting service. You pay them $9,000 a year for 12 years to watch your child during the day. Teaching reading and math is an extra service, and it costs more.

Professional Judgment: Spend More

In “Pay Now or Pay Later,” the CEP calculated the alleged hidden costs of exit exams using what is known as the “professional judgment” method. It assembled panels of educators and asked them what education services, in their professional judgment, a typical school district would need to reach two benchmarks: the current level of student performance on exit exams and a higher level representing a desired goal. The CEP then calculated the costs of providing those services.
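The arithmetic behind such an estimate is trivial; everything of substance lies in the panel’s subjective inputs. A minimal sketch in Python makes the point (the service names, quantities, and prices below are invented for illustration, not figures from the CEP study):

```python
# Professional-judgment costing reduced to its arithmetic core.
# Every figure below is a hypothetical panel input, not a number
# taken from the CEP study.

# (service, units needed per district, cost per unit in dollars)
panel_inputs = [
    ("remedial reading teachers", 12, 55_000),
    ("after-school tutoring slots", 400, 1_200),
    ("summer school seats", 250, 900),
    ("test-prep materials", 3_000, 40),
]

def estimated_cost(inputs):
    """Sum the panel's judgments: quantity times unit price for each service."""
    return sum(quantity * unit_cost for _, quantity, unit_cost in inputs)

print(f"Panel's estimated 'hidden' cost: ${estimated_cost(panel_inputs):,}")
# -> Panel's estimated 'hidden' cost: $1,485,000
```

Nothing in the calculation checks the quantities against reality; the output can only be as good as the panel’s guesses.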

The professional judgment method does not rely on empirical data gathering or analysis of actual budgets. It is analogous to the method the CEP uses to study NCLB: ask the system what it thinks it needs, then report that figure as the amount needed. The calculations produced by the professional judgment method are more or less just speculation: expert speculation, to be sure, but speculation. Even the cost estimates for achieving current outcomes are speculative; the CEP asks its panels of experts to judge what “a hypothetical average school district” would have to spend to produce the current outcome levels.

The problem is that the experts have an overwhelming incentive to inflate their cost estimates, even if only unconsciously. It is only natural for education practitioners to believe that exit exams entail the need for lots of new, additional education services. And education practitioners know that higher cost estimates for complying with exit exams will produce a political impetus to spend more money on education practitioners.

One justification the CEP offers for the professional judgment method is that it “best reflects the experiences of people who are actually responsible for delivering education services” and “reflects the views of actual service providers.” But that is exactly the problem: because they are the “actual service providers,” the experts sitting on these panels cannot help but be affected by the financial incentives they face as employees of the school system.

Another justification the CEP offers is that the professional judgment method is “the most commonly used” for studying education resource needs. Unfortunately, this is all too true. Following the proliferation of “adequacy” lawsuits, studies using the professional judgment method to calculate how much it costs to provide an adequate education have become an explosive growth industry. (See “Pseudo-Science and a Sound Basic Education,” Check the Facts, Fall 2005). The reason adequacy studies tend to prefer the professional judgment method is clear enough, and it does not seem like a stretch to suggest that the CEP might prefer that method for the very same reason.

“Pay Now or Pay Later” also states that “the cost estimates appear to be reliable within approximately 5–10 percent.” How do they know this, given that their method involves no empirical data collection? “Panels in the same state with the same task but different members and a different moderator on a different day will often differ from the average by between 5 and 10 percent.” In other words, the expert panels all reach results that are similar to one another, and therefore the results are deemed “reliable.” But consistency among like-minded panels shows only that they agree, not that their estimates are accurate.
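A toy simulation shows why the inference fails: if every panel shares the same upward bias, their estimates will sit within 5 to 10 percent of one another while all overshooting the true cost by the same wide margin. (All numbers below are hypothetical, chosen only to illustrate the reliability-versus-validity distinction.)

```python
import random

random.seed(42)

TRUE_COST = 1_000_000   # hypothetical true cost of reaching the benchmark
SHARED_BIAS = 1.4       # every panel overstates by ~40% (same incentives)
PANEL_NOISE = 0.05      # panels differ from one another by only ~5%

def panel_estimate():
    """One panel's estimate: the shared bias plus small panel-to-panel noise."""
    return TRUE_COST * SHARED_BIAS * random.gauss(1.0, PANEL_NOISE)

estimates = [panel_estimate() for _ in range(10)]
mean = sum(estimates) / len(estimates)

spread = max(abs(e - mean) / mean for e in estimates)
error = (mean - TRUE_COST) / TRUE_COST

print(f"Panel-to-panel spread: within {spread:.0%} of the average")  # roughly 5-10%
print(f"Average error vs. the true cost: {error:+.0%}")              # roughly +40%
```

Agreement of this kind measures reliability, not validity; a shared incentive moves every panel in the same direction, and comparing panels to one another can never detect it.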

Looking the Other Way

In the end, too many of the CEP’s publications reflect poorly on the organization’s ability to treat controversial issues fairly and face the empirical evidence squarely, the very qualities that induce major media outlets to seek out Jennings and the CEP for their opinions.

It is no surprise, then, to discover that the CEP’s take on school choice is as compromised as its views on other hot-button education issues. It does not produce a big annual study on school choice, but early in the debate it released a review of the existing research, “School Vouchers: What We Know and Don’t Know … and How We Could Learn More.” The review found the evidence on vouchers to be “inconclusive,” a result achieved only by throwing out all research on privately funded voucher programs, then declaring that the rest of the research produced “varying findings.” In fact, there have been seven scientifically valid random-assignment analyses of voucher programs, and all seven found either that all voucher students perform significantly better than their nonvoucher contemporaries, or at least that most of them do (in some studies the results for black students, the majority of participants, are positive, while the results for other students fail to achieve statistical significance). There is room for legitimate discussion on the limits of the existing research on vouchers, but to describe the research as “inconclusive” is a gross misrepresentation.

The CEP has even published a history of public schooling, titled “Do We Still Need Public Schools?” that does to history what its other publications do to current events. For example, the report uses quotations from the American founders about the importance of education to suggest that the founders were “early supporters of public schools,” which they were not. Public schools as we know them today didn’t exist at the time, and the historical record makes clear that most of the founders would not have supported a government-owned and government-run school system.

If Jack Jennings and the Center on Education Policy want to publish their opinions and call them “research,” that’s their right. But social scientists, commentators, and journalists have a responsibility to distinguish between unfounded opinions and serious empirical research and to warn people when a study doesn’t adhere to scientific standards. There’s no hope for improving education policy if we don’t keep the facts and evidence distinct from the public-school system’s party (and often partisan) line.

— Greg Forster is a senior fellow at the Milton and Rose D. Friedman Foundation.
