Checking NYC’s Facts
We take the essence of Eric Hanushek’s article (“Pseudo Science and a Sound Basic Education,” check the facts, Fall 2005) to be that existing means for appraising “adequate” school financing levels are, at best, inexact, and when in the hands of irresponsible advocates, often result simply in “junk science.”
Perhaps surprising to some readers, the principals of Management Analysis & Planning (MAP) concur with his thesis.
Indeed, in several articles published elsewhere, we have stated that in pursuing the doctrine of “adequacy,” the judicial system has far outstripped the capacity of social science to provide precise and verifiable answers.
How much does it take to enable a student from economically disadvantaged circumstances to achieve state-specified learning standards in mathematics, science, or language arts? What should high-school class sizes be? What are the characteristics of an effective teacher, one who achieves superior student outcomes?
Frankly, anyone who claims to know with certainty the answers to these crucial questions is usually an unabashed advocate for more money for schools and has little respect for policy analysis or research. We believe that qualified professionals who have demonstrated success providing instruction to diverse student populations can apply their knowledge to the discussion about what programs and processes are more likely to produce desired student outcomes. For that MAP offers no apologies. However, when professional judgment panel results are misused, that is indeed a problem, and it is in such circumstances that Hanushek’s argument deserves attention.
MAP has never claimed, in any of its adequacy studies, to have determined the amount below which expenditures are inadequate. We always offer estimates for policymakers to consider given assumptions specified in the analytic and estimation exercises. We had no control over how New York referees would use the report and its findings. We share Professor Hanushek’s concern that the referees apparently relied on a bottom-line estimate without considering the assumptions and methodology underlying that estimate. Had they done so they would have discovered several assumptions and statistical procedures that served to inflate the estimate, but about which reasonable experts could disagree.
That said, Professor Hanushek’s thesis falls a bit shy, and here is why.
Professor Hanushek’s criticisms of the voodoo science of professional judgment and successful schools models of adequacy analysis, though justified when those models are used to determine “the” number, lead him to rely upon another analytic technique, cost-function analysis. Yet that technique, when applied by economists to the circumstances in New York, called for even more money than any of the professional judgment analyses specified.
In many ways, the AIR/MAP (American Institutes for Research/Management Analysis & Planning) figures were the median of all estimates if one includes those produced by William Duncombe and John Yinger of Syracuse. The latter use a cost-function analysis and claim to incorporate “efficiency” into their estimation models. Thus, criticizing the professional judgment studies without addressing cost-function estimates leaves Hanushek’s argument incomplete and open to even higher estimates by advocates relying upon econometric methods.
MAP acknowledges and, when provided with an opportunity, has always insisted that any infusion of additional school money should be accompanied by other systemic reforms, including stronger accountability for school failure with an emphasis on performance above all else, changes in antiquated systems (for example, single-salary schedules), and a stronger focus on evaluations to determine success and failure of programs and individuals. We realize that money by itself is not an answer to the nation’s education challenges.
Professor Hanushek does not declare how much money is the “right” amount for New York. That is good. He should not. No one knows the “right” amount. However, one can know the right process. The right process is to provide the political system with the best advice one can and then let the deliberative dynamics of representative government take hold.
James R. Smith
James W. Guthrie
Chairman, Management Analysis
and Planning, Inc.
Eric Hanushek replies: The MAP principals underscore a fundamental problem that runs throughout the use of costing-out studies, in New York and elsewhere. School finance policy is politics, in both the good and bad sense of the term. While science may provide guidance on some aspects of the problem, the costing-out studies are not scientific in the traditional sense. Even so, partisans in the dispute distort and misuse those studies under the banner of science. The authors, even with the best of intentions, unleash a process that they cannot control—and at least in this case, seem dismayed by.
As Smith and Guthrie point out, other methods have now found their way into the costing-out world, including the cost-function method. (Still another is the “state of the art” or evidence-based method.) Although they did not play a prominent role in the CFE judgment, such models have been introduced in a number of other state court cases and legislative deliberations. As Smith and Guthrie indicate, these alternatives are no better than the professional judgment or successful schools approaches—and may be worse in important dimensions.
It is clear, though, why costing-out studies are almost always commissioned by partisans in the school finance debates. Courts and legislatures are not going to make any nuanced use of them, but instead tend to extract a number, paint it as science, and use it for their purposes. Think alchemy, not science.
North Carolina Charters
Renowned pollster George Gallup once referred to data gathering this way: “Not everything that can be counted counts; and not everything that counts can be counted.” This comment is apropos to any discussion of charter school research, especially recent findings from Robert Bifulco and Helen Ladd (“Results from the Tar Heel State,” research, Fall 2005). Their study, sharply critical of North Carolina charter schools, is flawed and fails to “count” what matters most to parents and students.
The authors conclude that charter schools negatively affect performance, and that the public interest is not “well-served” by these schools. Consider, though, that only students who either entered a charter school after 4th grade, or exited a charter school before 8th grade, were included in their main analysis. This means that longer-term charter-school attendees (and, presumably, those students deriving the greatest benefit from these schools, since they stayed put) were excluded. In addition, the data used to assess performance came from state end-of-grade tests measuring knowledge of state curriculum—a seemingly obvious bias against innovative charter schools exercising their freedom to employ alternative curricula.
Bifulco and Ladd’s data also differ from recent Department of Public Instruction statistics. In 2004–05, 63 percent of regular North Carolina charter schools made adequate yearly progress under federal accountability guidelines, compared with just 58 percent of traditional public schools. Charter schools were also more likely to earn the label “school of excellence” than traditional public schools (33 percent compared to 24 percent).
And what about those intangibles that aren’t easily “counted”? Charter schools (and choice programs) empower parents—not school boards—with the freedom to select the best school for their child. In the final analysis, Bifulco and Ladd’s study demonstrates what parents have known all along: no one school can possibly meet the needs of all students, be it public, private, or charter. But charter schools do provide valuable and much-needed options, often to poor and disenfranchised families who cannot afford private school tuition. Doesn’t it make sense to let parents be the ultimate arbiters of whether their interests are “well-served” by charter schools?
Director, North Carolina
Bifulco and Ladd’s negative conclusion about North Carolina charters is much less certain than it appears. For instance, despite their finding that students in charters make less academic progress than students in regular public schools, enrollment in N.C. charter schools persists, and grows.
It’s also troubling that the negative effect of charters that they report depends upon charter age. My research shows that the apparent negative effect of charters in their third year, or older, was small—.01 to .03 standard deviations—and in some instances insignificant. I also find that the negative effect varied by grade level: larger for 3rd through 5th graders and statistically insignificant for 6th through 8th graders.
The authors also tout their method as addressing the problem of self-selection. It does, but at a cost. The method can mislead unless we understand why students enter charters and why students leave. Some students enter because they are having difficulty in the regular public schools, difficulty that continues to affect them, even worsen, while attending charters. As for students leaving charters, charters may simply not be a good fit for everybody. No doubt some students leave Ivy League colleges and do better elsewhere. Does that mean the Ivy League schools are teaching poorly?
Finally, the effect of charters on students who leave is likely a worst-case estimate; it should be complemented by one comparing the academic growth of students who stay in charters with that of students who stay in regular public schools. My research finds that math-score growth for North Carolina students who stay in charters is not significantly different from that of students who stay in regular public schools; reading-score growth is higher, significantly so, for students staying four or five consecutive years.
Associate Professor of Economics
North Carolina State University
Bifulco and Ladd reply: We agree with Ms. Kakadelis that test scores don’t count for everything. At the same time, they clearly matter, especially in a state such as North Carolina that has long had a statewide course of study and a set of state tests aligned with that curriculum. Given that state taxes are used to pay for charter schools, the public has a valid interest in the extent to which the students in those schools are meeting state achievement goals.
Like Kakadelis, we support the idea of more schooling options, especially for students in low-income families. We differ, however, in wanting those new options to be as effective in promoting student achievement, at least on average, as the traditional public schools. Our study indicates that North Carolina charter schools are not meeting that standard.
Newmark, whose own detailed study of North Carolina charter schools also finds negative achievement effects, suggests our results are misleading because students who choose charter schools may be on a downward achievement trajectory before they switch to a charter school. The full version of our paper reports an additional test to rule out this hypothesis.
Sol Stern’s “An Education Revolution That Never Was” (forum, Fall 2005) is neither forthright about education in New York City nor informed about education generally, as his use of sources and data makes evident.
He quotes a teacher alleging that she was punished for asking “uncomfortable questions” during training, for example. He neglects to mention the thousands of teachers who applaud our professional development. Nor does he mention that we have had more teacher applications than ever before.
On our alleged unwillingness to use phonics-centered reading programs, Stern quotes the developer of one such program whose $27 million contract with the city was discontinued. Stern notes that this reading program helped raise scores in the city’s lowest-performing schools in the 1990s. But he fails to mention that the program was part of an intensive effort to improve these schools that included a vast infusion of additional resources, including teachers for class-size reduction, capping of enrollment, coaches, administrators, and restructuring—elements of our reform for all schools. And Stern doesn’t take issue with improved test scores unless they are our scores—which represent the greatest gains in memory.
Stern’s fixed idea about the teaching of phonics obscures the central point in our curriculum reforms: phonics is essential to reading instruction, but it is not the only essential skill. Anyone who has taught children knows this at least intuitively. As even a brief visit to our web site indicates, the New York public schools offer a rich and diverse menu of reading programs and interventions for its students. We tailor offerings to a student’s needs: not all children need a fortified diet of phonics, but those who do, get it.
Stern also fails to mention how some of our early critics are now our supporters—experts who criticized our phonics component in 2003 (his sole point of reference) have been partners and advisors in the intervention strategies we have designed with help from special-education consultant Dr. Eileen Marzola.
We value the experience and the different strengths of our teachers. We understand the potential and the strengths of our students. That is why we have a curriculum and interventions that allow for nuance, creativity, critical thinking, and academic rigor. The results show it.
Deputy Chancellor for
Teaching and Learning
New York City Department of Education
Sol Stern replies: Ms. Farina writes that “thousands of teachers applaud our professional development.” How does she know? The DOE has never asked the teachers for their opinion. She claims that the Success for All phonics program was only “part” of the reason for academic improvement in the lowest-scoring schools. Again, how does she know? Her department didn’t study SFA’s relative contribution before it recklessly ditched the program. She says some “early critics” have become supporters, but cites only someone who’s on the DOE payroll. All of which highlights one of the main points of my article: namely, that we need an independent research agency to evaluate DOE operations and claims of success. Otherwise, all we have is spin from a mayoral agency dedicated to getting the boss reelected.
Private Schools for the Poor
James Tooley (“Private Schools for the Poor,” features, Fall 2005) reports widespread existence of private schools in five poor countries—India, Ghana, Kenya, Nigeria, and (to a lesser extent) China—and addresses two common “myths” about such schooling: that “private education for the poor does not exist” and that “private education for the poor is low quality.”
Evidence for debunking the first myth already existed nearly a decade ago. Several studies found that even the poorest households in India and Pakistan use private schools extensively. Tooley’s contribution is to extend the South Asian evidence to countries in Africa and to test it in communist China.
His findings also add to the evidence base on the second myth: namely, that, in several developing countries, private school students outperform their public-school counterparts after controlling for schools’ student-intakes. Thus the article helps to build a fuller picture of private and public schools for the poor in developing countries and adds to existing knowledge.
However, I believe that Tooley’s concern that public intervention crowds out private initiative in education is not well placed. First, the evidence adduced for such a trade-off is weak. Second, surely one should not lament parents’ abandonment of private for public schools when fees are abolished in the latter. It is a welfare-maximizing choice parents make in light of information about their circumstances, which is far more information than that available to the analyst.
While agreeing with Tooley that private schools tend to provide better-quality education (as I also found in a 1996 study I conducted), I would be more cautious and nuanced about the policy implications. Tooley advocates reform programs that back private initiative with government support, such as voucher schemes and charter schools. From colonial times, India has used a charter-like system of publicly funded, privately produced education; such private-public partnerships are called “aided” schools. However, evidence from Uttar Pradesh, India, suggests that such schools are just as ineffective and poorly resourced as public schools.
Thus private ownership per se may not be the key. Other attendant factors are also crucial: whether there are performance criteria or incentives built in to the formula for government aid (there are none in India); how much central oversight the government provides (a great deal in India); and to what extent the government is able to resist demands from aided-school teachers to be granted the same employment and other arrangements as those existing for public school teachers.
In Uttar Pradesh, militantly organized aided-school teachers’ unions have demanded and successfully obtained treatment comparable to that received by public school teachers. Over time, aided schools have become very similar to public schools in terms of level of teachers’ salaries, school resources, centralized administration, salary disbursement, teacher appointment procedures, and student achievement outcomes. Aided-school teachers are no longer locally accountable. This example encourages caution in assuming that private delivery of publicly funded education is a panacea. A good deal of thought may be necessary about the design of incentives in grants to such schools and about how to keep “private” truly private by resisting demands from vested interests to make such schools more like public schools.
Research Officer, Centre for
the Study of African Economies
University of Oxford