Popular college “opportunity” measures lead to flawed conclusions
Contact | Jackie Kerstetter: email@example.com, EdNext Communications
New measure allows universities to judge recruitment efforts against their own missions and circumstances
January 25, 2019—Concerned that too few economically disadvantaged Americans earn college degrees, federal lawmakers have proposed ranking colleges and universities by the percentage of low-income students they enroll, using popular measures of “opportunity” that national media outlets have already begun to apply. In a new article for Education Next, Caroline Hoxby of Stanford University and Sarah Turner of the University of Virginia demonstrate that these popular measures are flawed because they confound differences in universities’ recruitment efforts with differences in their circumstances. The authors offer a new, more accurate method to gauge a university’s success in providing opportunities to low-income students.
Popular opportunity measures such as the share of students receiving federal Pell grants (Pell Share), the share of students from the bottom 20 percent of the national income distribution (Bottom Quintile), and the Intergenerational Mobility measure compare enrollment to national averages, failing to take into account differences between institutions’ available pools of potential students. These methods can penalize institutions that are actually succeeding in providing opportunities for low-income students and, perversely, reward institutions that are much less successful. Hoxby and Turner propose a new method to compare an institution’s enrollment to its “relevant pool,” the pool of students from which it could plausibly draw based on its academic mission and geographic location. Using comprehensive income and achievement data, they construct the relevant pools for several universities to make the following points:
The popular measures often reward institutions for their circumstances, especially negative circumstances, rather than for their effort. For example, if the University of Maine and the University of Connecticut each served every student in their respective states who met their academic standards, Maine would draw 22 percent of its students from families with incomes below $40,000, compared with 10 percent at the University of Connecticut. The University of Connecticut would be slated for penalties because it faces an income distribution with a high average income. The University of Maine would be rewarded because average incomes in its relevant pool are low.
Ironically, the increasingly popular Intergenerational Mobility measure—which is intended to reflect the share of a university’s students who come from the national bottom quintile and end up as adults in the top quintile—penalizes universities twice over for facing a relevant pool with high income equality, while it and the other popular measures reward universities whose relevant pools are highly unequal. For instance, the University of Wisconsin is due for penalties because its pool has unusual income equality: its students are unlikely to come from the national bottom quintile and are also unlikely to end up in the national top quintile. Conversely, the University of California campuses are slated for rewards because their pool is unusually unequal, with a disproportionate number of ultra-poor and ultra-rich students.
The popular measures can get it so wrong that universities that are disproportionately successful at enrolling low-income students end up with low rankings on those measures, and vice versa: universities that are disproportionately unsuccessful can end up with high rankings. For instance, when comparing all 50 flagship state universities, those in Illinois, Connecticut, and Wisconsin rank among the best on the relevant-pool-based measures proposed in this study, yet they rank near the bottom on the Pell Share and Bottom Quintile measures. The reverse is true of the universities of Maine, Montana, and New Mexico: they rank poorly on the relevant-pool-based measures but look stellar on the popular measures.
Better measures are possible: the relevant-pool measure reveals student representation at all income levels. Rather than comparing an institution’s enrollment to national averages, or focusing only on low-income students, the relevant-pool measure allows institutions to see the big picture of enrollment or to focus on a particular part of the income distribution that interests them (see figure below).
Universities can engage in sound self-evaluation with the right measures and have a better chance of attaining their missions if they do. Hoxby and Turner lay out a process for a university to engage in self-evaluation, noting that leaders and constituents must start by defining the university’s mission and constraints. With those in hand, the relevant-pool measure would allow a university to measure itself accurately against its own goals.
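The core of the relevant-pool comparison can be sketched in a few lines of code. The sketch below is purely illustrative, with hypothetical income brackets and shares (not data from the study): it divides the share of a university’s enrolled students in each income bracket by that bracket’s share of the relevant pool, so a ratio below 1.0 flags underrepresentation relative to the students the university could plausibly draw.

```python
# Illustrative sketch of a relevant-pool comparison.
# All brackets and numbers below are hypothetical, not from the study.

def representation_ratios(enrolled_share, pool_share):
    """For each income bracket, divide the share of enrolled students by
    the bracket's share of the university's relevant pool. A ratio below
    1.0 indicates underrepresentation relative to the pool of students
    the university could plausibly draw from."""
    return {bracket: enrolled_share[bracket] / pool_share[bracket]
            for bracket in pool_share}

# Hypothetical shares of students by family income.
pool = {"under_40k": 0.22, "40k_to_110k": 0.48, "over_110k": 0.30}
enrolled = {"under_40k": 0.11, "40k_to_110k": 0.44, "over_110k": 0.45}

ratios = representation_ratios(enrolled, pool)
for bracket, ratio in ratios.items():
    print(f"{bracket}: {ratio:.2f}")
```

In this made-up example, students from families earning under $40,000 would appear at only half their relevant-pool rate, whereas a national-average benchmark like Pell Share could miss this if the pool itself were unusually low-income.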
“We have purposely avoided ranking all institutions because that would require us to assert the relevant pool—and thus the mission—of thousands of institutions of higher education in the United States. This is not our right,” say Hoxby and Turner. “[However,] a university that used our proposed relevant-pool-based measure would not find a conflict between pursuing its mission and providing opportunities to students regardless of background.”
For more details, read “The Right Way to Capture College ‘Opportunity’: Popular Measures Can Paint the Wrong Picture of Low-Income Student Enrollment.” To speak with the authors, please contact Jackie Kerstetter at firstname.lastname@example.org. The article is available now on educationnext.org and will appear in the Spring 2019 issue of Education Next, available in print on February 27, 2019.
About the Authors: Caroline Hoxby is the Scott and Donya Bommer Professor in Economics at Stanford University and a senior fellow of the Hoover Institution. Sarah Turner is University Professor of Economics and Education and Souder Family Professor at the University of Virginia. This article is based on “Measuring Opportunity in U.S. Education.”
About Education Next: Education Next is a scholarly journal committed to careful examination of evidence relating to school reform, published by the Education Next Institute and the Harvard Program on Education Policy and Governance at the Harvard Kennedy School. For more information, please visit educationnext.org.