Grading Schools

Can citizens tell a good school when they see one?



Never before have Americans had greater access to information about school quality. Under the federal No Child Left Behind Act (NCLB), all school districts are required to distribute annual report cards detailing student achievement levels at each of their schools. Local newspapers frequently cover the release of state test results, emphasizing the relative standing of their community’s schools. Meanwhile, new organizations like GreatSchools and SchoolMatters aggregate this information and make it readily available to parents online.

But do all these performance data inform perceptions of school quality? Or do citizens base their evaluations instead on such indicators as the racial or class makeup of schools, regardless of their relationship with actual school performance?

In discussions of parental choice in education, researchers have frequently speculated that parents would base their evaluations of schools primarily on the characteristics of their student bodies. Columbia University professor Amy Stuart Wells, for example, concluded that the decisions of St. Louis parents participating in a voluntary desegregation program were based “on a perception that county is better than city and white is better than black, not on factual information about the schools.” And even if some parents base their decisions on educational quality, many observers worry that low-income and minority parents will be less informed about or interested in school quality, placing their children at a disadvantage in the education marketplace.

The evidence on these questions available to date comes from small-scale studies of specific school districts, making it difficult to reach general conclusions about the degree to which parents and the public at large are well informed about the performance of local schools. We are now able to supplement that research with data from a nationally representative survey of parents and other adults conducted in 2009 under the auspices of Education Next and the Program on Education Policy and Governance (PEPG) at Harvard University. Because we knew the addresses of respondents in advance of the survey, we were able to link individual respondents to specific public schools in their community and to obtain their subjective ratings of those schools. We also gathered publicly available data on student achievement in the same schools, making it possible to compare respondents’ subjective ratings to objective measures of school quality.

Our results indicate that citizens’ perceptions of the quality of their local schools do in fact reflect the schools’ performance as measured by student proficiency rates in core academic subjects. Although citizens also appear to take into account the share of a school’s students who are poor when evaluating its quality, those considerations do not overwhelm judgments based on information about academic achievement.

Public Perception and Objective Quality Measures

The 2009 Education Next–PEPG Survey was administered to a nationally representative sample of 3,251 American adults, including an oversample of 948 residents of the state of Florida. The Florida oversample was conducted in order to link perceptions of school quality to the unusually rich information about school performance available in that state. The survey was administered over the Internet by the polling firm Knowledge Networks in February and March of 2009. (For methodological details and complete survey results, see “The Persuadable Public,” features, Fall 2009.)

Before conducting the survey, we geo-coded the address of each respondent to latitude-longitude coordinates and a census block. We also obtained latitude-longitude coordinates for every U.S. public school from the National Center for Education Statistics. Using census blocks to place respondents within school districts, we then linked each respondent to the closest elementary, middle, and high schools (up to five schools of each type) operated by the local school district.
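
As a rough illustration of this linkage step, the sketch below matches a respondent to the nearest schools of a given type within his or her district using great-circle distance. The field names and data structures here are hypothetical stand-ins; the survey's actual processing pipeline is not published in this article.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude-longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))

def closest_schools(respondent, schools, level="elementary", k=5):
    """Return up to k schools of the given level, operated by the respondent's
    district, ordered by distance from the respondent's home address."""
    candidates = [s for s in schools
                  if s["level"] == level and s["district_id"] == respondent["district_id"]]
    candidates.sort(key=lambda s: haversine_miles(respondent["lat"], respondent["lon"],
                                                  s["lat"], s["lon"]))
    return candidates[:k]
```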

The survey asked all respondents this question: “Each of the following schools in your area serves elementary-school students. Which one, if any, do you consider your local elementary school?” It then offered each respondent a personalized list of the five closest elementary schools from which to pick; respondents were also allowed to specify a school that did not appear on the list. After a specific elementary school had been identified, the survey asked the respondent to grade this school on a scale from A to F. This same process was then repeated for middle and high schools.

We converted the A to F grades that respondents assigned to the schools into a standard grade-point-average (GPA) scale (A=4 and F=0). Of the elementary and middle schools our survey respondents rated, 41 percent received a B grade, while 36 percent received a C. In contrast, only 14 percent of schools received an A grade, 7 percent a D, and 2 percent an F. This distribution corresponds to an overall GPA of 2.57, or just below a B-minus average. Interestingly, respondents assigned their local middle schools grades that were, on average, one-quarter of a letter grade lower than the grades they assigned their local elementary schools (see Figure 1).
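
The overall average can be checked directly from the rounded shares reported above; a minimal sketch of the arithmetic:

```python
# Convert A-F grades to a 4-point scale and recover the overall GPA
# from the reported (rounded) grade distribution.
grade_points = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
share = {"A": 0.14, "B": 0.41, "C": 0.36, "D": 0.07, "F": 0.02}  # rounded shares from the survey

gpa = sum(grade_points[g] * share[g] for g in grade_points)
print(round(gpa, 2))  # 2.58; the reported 2.57 reflects the unrounded shares
```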

We measured actual school quality as the percentage of students in a school who achieved “proficiency” in math and reading on the state’s accountability exams (taking the average proficiency rate across the two subjects). School-level data on student proficiency were drawn from SchoolDataDirect.org for the 2007–08 school year, the most recent year for which test-score data would have been publicly available when the survey was conducted. Although the rigor of state content standards and definitions of math and reading proficiency vary widely (see “State Standards Rise in Reading, Fall in Math,” features), we are able to adjust for these differences by limiting our comparisons to respondents within the same state when examining the relationship between proficiency levels and school ratings.
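
As a sketch of how such a measure might be constructed from a school-level file (the file and column names below are hypothetical), one would average the two subject-specific proficiency rates and, for within-state comparisons, remove each state's mean:

```python
import pandas as pd

# Hypothetical school-level file with 2007-08 state-test results.
schools = pd.read_csv("school_proficiency_2007_08.csv")

# Average proficiency across the two tested subjects.
schools["pct_proficient"] = schools[["math_proficient", "reading_proficient"]].mean(axis=1)

# Limiting comparisons to schools within the same state (here by demeaning
# within state; equivalently, by including state indicators in the later
# regression) adjusts for the fact that "proficient" means different things
# under different states' standards.
schools["pct_proficient_within_state"] = (
    schools["pct_proficient"] - schools.groupby("state")["pct_proficient"].transform("mean")
)
```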

To be sure, the percentage of students achieving proficiency in core academic subjects is an imperfect measure of quality, even when comparing schools in the same state. Given the strong influence of out-of-school factors on student achievement, any quality measure based on the level of student performance at a single point in time will be heavily influenced by characteristics of a school’s student body. At the same time, proficiency rates are the only quality measure available for a national sample of schools. They are determined in part by the amount students learn in school, and research suggests that moving to a school with higher proficiency rates does produce achievement gains.

Nor do we wish to claim that any judgment of school quality that does not correspond to test-score performance is uninformed or irrational. The ability to promote math and reading achievement is hardly the only dimension along which citizens are likely to evaluate their local schools. But we suspect that high test scores go along with other aspects of school quality that citizens value in their schools, so that evidence of a connection between student achievement and public opinion likely indicates that parents and other members of the public have the information they need to make reasonable judgments about their schools.

National Evidence

These data enable us to provide the first evidence on the extent to which citizens’ subjective ratings of specific schools correspond to publicly available information on their actual performance. Because other school characteristics may also influence perceptions of school quality, we incorporated into our analysis data from the National Center for Education Statistics on the racial/ethnic composition of each school, the percentage of students eligible for free or reduced-price lunch (an indicator of poverty), average cohort size (our preferred measure of school size), and pupil-teacher ratio (a proxy measure of class size) in the 2007–08 school year. We exclude high schools when analyzing the data for the nation as a whole because proficiency data are unavailable for many of them, and when available, typically reflect the performance of only a single cohort of students. We also adjust for whether the respondent was evaluating an elementary or a middle school to account for the fact that middle schools received systematically lower grades from survey respondents.

Figure 2 presents the strength of the relationship between citizen ratings of school quality and each of these school characteristics after taking into account the other key variables built into our analysis. The values of each variable except the one identifying elementary schools have been standardized to illustrate their relative importance. (In technical terms, the relationships presented for these variables reflect the effect of an increase of one standard deviation in the value of the characteristic in question.) The figure confirms that student proficiency rates are a significant predictor of citizen ratings of school quality. An increase of 18 percentage points in percent proficient (i.e., one standard deviation) is associated with a rating that is on average 0.16 grade points higher, or about one-sixth of a letter grade.
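
The article does not publish its exact specification, but a regression of the following form, sketched here with hypothetical variable names, captures the logic: standardize each school characteristic, include an indicator for middle schools, and compare only within states.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level file linking each rating (on the 0-4 GPA scale)
# to the characteristics of the rated school.
df = pd.read_csv("ratings_with_school_characteristics.csv")

# Standardize each school characteristic so coefficients can be read as the
# change in rating per one-standard-deviation increase.
for col in ["pct_proficient", "pct_black", "pct_hispanic",
            "pct_free_lunch", "cohort_size", "pupil_teacher_ratio"]:
    df[col + "_std"] = (df[col] - df[col].mean()) / df[col].std()

# OLS of the rating on standardized school characteristics, a middle-school
# indicator, and state fixed effects (C(state)), with errors clustered by school.
model = smf.ols(
    "rating ~ pct_proficient_std + pct_black_std + pct_hispanic_std"
    " + pct_free_lunch_std + cohort_size_std + pupil_teacher_ratio_std"
    " + middle_school + C(state)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

print(model.params["pct_proficient_std"])  # the article reports roughly 0.16 grade points
```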

Examining the racial/ethnic and class makeup of a school’s student body in isolation would suggest that both are important predictors of citizen ratings, a fact that may explain the common perception that this is the case. In particular, schools with 25 percentage points more African American students received ratings that were 15 percent of a letter grade lower, while schools with 24 percentage points more Hispanic students received ratings that were 16 percent of a letter grade lower. Schools with 26 percentage points more poor students received ratings that were one-quarter of a letter grade lower.

However, when these variables are considered simultaneously and alongside school performance and resource measures, only the poverty indicator retains predictive power. Neither the percentage of students who are African American nor the percentage who are Hispanic is systematically related to perceptions of school quality. The percentage of students who are poor remains an important predictor of citizen ratings, with a relationship essentially as strong as that for proficiency rates.

Even after controlling for proficiency rates and other school characteristics, middle schools receive ratings that are, on average, 18 percent of a letter grade lower than comparable elementary schools. In other words, proficiency rates explain some, but by no means all, of the lower perceived quality of middle schools. This finding is of interest given recent research suggesting that middle schools have adverse consequences for student achievement (see “Stuck in the Middle,” research). In contrast, neither school size nor pupil-teacher ratio is an important determinant of perceptions of school quality. In fact, the weak relationship between pupil-teacher ratio and school ratings runs in the direction opposite to what one would expect: schools with larger classes receive somewhat higher grades, perhaps because effective schools attract more families to the neighborhood.

As noted above, it has often been speculated that disadvantaged groups are less informed about school quality than more-advantaged groups. But we find that the relationship between school performance and citizen ratings is as strong for African American and Hispanic respondents as it is for whites. The relationship between school quality and citizen ratings is also essentially the same for high-income and more-educated respondents as it is for low-income and less-educated respondents.

We also consider whether the relationship between school performance and citizen ratings is stronger for parents of school-age children, who are arguably the most connected to their local schools, and for homeowners, whose property values are influenced by school quality, than it is for other citizens. Perhaps surprisingly, homeowners are no more sensitive to differences in school quality than are other citizens. However, the relationship between proficiency rates and school ratings is more than twice as strong for parents of school-age children as for other respondents (see Figure 2). An increase of one standard deviation in percent proficient is associated with a rating from parents that is one-third of a letter grade higher, as compared with 16 percent of a letter grade higher for the public as a whole. Parents also give low-scoring schools far lower ratings than do other local residents, but this difference narrows and eventually reverses direction as proficiency rates increase (see Figure 3). Like those of other citizens, parents’ ratings of local schools are not influenced by the schools’ racial/ethnic composition, school size, or pupil-teacher ratios. However, parents do appear to be somewhat more responsive than other citizens to school poverty rates and take an especially dim view of middle schools, assigning them grades that are 39 percent of a letter grade lower than otherwise similar elementary schools.
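
The parent result amounts to letting the proficiency slope differ by parental status. Continuing the earlier sketch, with a hypothetical 0/1 parent indicator in the same respondent-level data frame:

```python
import statsmodels.formula.api as smf

# Interaction between proficiency and parental status (variable names hypothetical).
interact = smf.ols(
    "rating ~ pct_proficient_std * parent + pct_free_lunch_std"
    " + middle_school + C(state)",
    data=df,
).fit()
# 'pct_proficient_std'         -> slope for non-parents
# 'pct_proficient_std:parent'  -> how much steeper the slope is for parents
```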

Finally, we consider the issue of differences in school quality across states. Because NCLB allows each state to set its own standards for proficiency, schools in different states with the same percentage of students achieving proficiency may be of markedly different quality if one state has high standards and the other low. The national sample allows us to examine the degree to which citizen ratings of school quality are responsive to performance levels relative to the nation or simply to differences in performance within specific states. The National Assessment of Educational Progress (NAEP) conducted every two years by the U.S. Department of Education provides evidence on the average performance of 4th- and 8th-grade students in each state in mathematics and reading. We use data from the 2007 NAEP to see whether respondents in states with higher-scoring students rate their schools higher, on average, than respondents in states with lower NAEP scores. That is, if we compare respondents whose local schools have the same proficiency rate as measured by their state test, do the respondents in states with better schools, as measured by student performance on the NAEP, assign their school higher grades? We find no evidence that respondents in general, or even parents, have information about school quality beyond the information provided on the state assessments. In other words, citizens appear to be taking cues about school quality from local comparisons or from information provided by their state testing system without taking into account the relative rigor of state standards.
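
One way to operationalize this test is to add the state's average 2007 NAEP score to the model; because that score varies only across states, the state indicators must be dropped and errors are best clustered by state. A sketch, again with hypothetical variable names, not the article's published specification:

```python
import statsmodels.formula.api as smf

# 'naep_2007_std' is a hypothetical standardized state-mean NAEP score merged
# onto each respondent's record.
naep_model = smf.ols(
    "rating ~ pct_proficient_std + naep_2007_std + pct_free_lunch_std + middle_school",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

# A coefficient on naep_2007_std near zero is consistent with the finding that
# ratings respond to within-state comparisons but not to cross-state differences.
print(naep_model.params["naep_2007_std"])
```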

Levels or Growth?

Our analysis yields strong evidence that citizens, and especially parents of school-age children, rate schools in a way that lines up with publicly available information about school quality. As discussed previously, however, the percentage of students scoring at the proficient level on state tests is an imperfect indicator of school quality, contaminated as it is by the fact that student achievement is influenced by a host of factors outside a school’s control. A better, if still imperfect, measure of school quality is the amount of growth in student achievement from one year to the next. To examine the correspondence between citizen perceptions of school quality and measures of test-score growth, we turn to our representative sample of residents of Florida, where the state accountability system evaluates schools based on both test-score levels and test-score growth. Because high-school performance data are widely available in Florida, we are able to include high schools in this portion of the analysis.

Florida assigns schools letter grades based on a point system with eight main components, which we divide into two categories: level-related points (percentage proficient in math, English, writing, and science) and growth-related points (percentage making learning gains in math and reading and the percentage of the lowest 25 percent of students making gains in math and reading). The level variable is highly correlated with the school quality measure (percent proficient) used in the national analysis, but the correlation between the growth variable and percent proficient is considerably weaker.

Our basic strategy is to compare the ratings Florida residents assigned to their schools both to test-score levels and to test-score growth at those schools. Because measures of test-score growth are less stable over time than measures of test-score levels, we average the points awarded to each school based on levels and growth over the previous three years. Adjustments are also made for the same demographic and school characteristics as in the national analysis. To make the results as comparable as possible to those reported for the national sample, we also scale the point variables so that a one-unit increase in each variable corresponds to a shift of one standard deviation in the performance distribution of Florida public schools.
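
A sketch of this construction, with hypothetical column names for the Florida point data:

```python
import pandas as pd

# Hypothetical Florida school file with level- and growth-related points
# for the three most recent school years.
fl = pd.read_csv("florida_school_points.csv")

for var in ["level_points", "growth_points"]:
    # Average over the previous three years to smooth year-to-year noise.
    three_year_avg = fl[[f"{var}_2006", f"{var}_2007", f"{var}_2008"]].mean(axis=1)
    # Rescale so a one-unit change equals one standard deviation across
    # Florida public schools, matching the scaling in the national analysis.
    fl[var + "_std"] = (three_year_avg - three_year_avg.mean()) / three_year_avg.std()
```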

The results indicate that Florida residents’ perceptions of school quality are even more responsive to differences in student achievement levels than are those of the national public. An increase of one standard deviation in the level variable is associated with ratings that are almost one-third of a letter grade higher after taking into account other school characteristics. We also find that perceptions of school quality in Florida are unrelated to student demographic characteristics, including the percentage of students who are poor, once we take into account levels of student achievement. Although we cannot be sure, both Floridians’ greater responsiveness to test performance and their lack of responsiveness to student demographic characteristics could reflect the transparency and salience of the state’s high-profile school accountability system.

When both the test-score level and growth variables are examined simultaneously, however, the relationship between level-related points and citizen evaluations of schools is almost twice as strong as for growth-related points. This suggests that citizen ratings do reflect differences in the growth in student achievement across schools, but that this is primarily because of the correlation between achievement levels and achievement growth.

The Role of Accountability Systems

So far we have shown that citizens’ assessments of schools are strongly related to objective measures of performance made available by state accountability systems. Yet it is difficult to determine whether respondents’ apparent sensitivity to actual quality is the result of publicly available information or simply direct experience with schools. The fact that parental perceptions track actual school quality more closely than those of other citizens, but the perceptions of homeowners do not, suggests that direct interactions with a school may be a more important factor than simply having a vested interest in acquiring information about local school quality. But do accountability systems also play a role in shaping citizen perceptions?

Again, Florida provides an ideal case for more detailed analysis. As noted above, the Florida Department of Education uses the total number of points received (i.e., the sum of level- and growth-related points) to assign each school a letter grade between A and F. These grades receive considerable media attention in Florida, so we might expect citizen ratings to be correlated with them. This expectation is confirmed in the data: a school grade that is one point higher (again measured on a standard GPA scale) is associated with a respondent rating that is 0.2 grades higher.

To test the hypothesis that publicly available information has an impact over and above direct observation of school performance, we can compare the ratings given by respondents whose schools were very close to the cutoffs in the point system used by Florida to assign school grades. We know that schools with more points received higher ratings on average, but might also expect to see a “jump” in the average rating at these cutoffs. Because schools on either side of the cutoff should be of essentially the same quality, we can interpret any jump in the rating observed at the cutoff as the pure effect of information provided by the school grade on citizen perceptions of school quality.

We focus our attention on the B/C cutoff, because that is the only one for which we have enough respondents assigned to schools near the cutoff to yield results with a reasonable degree of precision. Comparing respondents’ ratings of schools on either side of this cutoff suggests a large positive effect of receiving the higher (B) grade, with an increase in the grades assigned to schools in the range of 36 to 57 percent of a letter grade. That the publicized school grades have a direct effect on respondent ratings over and above the relationship between ratings and the underlying point variables suggests that the signals provided by the state’s school accountability system do in fact affect citizen perceptions of their local schools.
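
This is, in effect, a regression discontinuity comparison. A minimal sketch, with a purely illustrative bandwidth and hypothetical variable names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level Florida file: 'points_from_cutoff' is the school's
# total accountability points re-centered at the B/C cutoff, and 'got_B' is a 0/1
# indicator for falling on the B side of that cutoff.
rd = pd.read_csv("florida_ratings_near_cutoff.csv")
window = rd[rd["points_from_cutoff"].abs() <= 20]  # bandwidth is illustrative only

# Local linear regression discontinuity: separate slopes on each side of the
# cutoff, with the coefficient on got_B estimating the jump in respondents'
# ratings attributable to the letter grade itself.
rd_model = smf.ols(
    "rating ~ got_B + points_from_cutoff + got_B:points_from_cutoff",
    data=window,
).fit()

print(rd_model.params["got_B"])
```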

Implications

The findings reported above represent the first systematic evidence that Americans’ perceptions of the quality of their local public schools reflect publicly available information about the academic achievement of the students who attend them. Importantly, disadvantaged segments of the population are no less informed about school quality than other citizens. Although the mechanisms explaining this responsiveness are not entirely clear, our evidence suggests that both direct experience with schools and the public dissemination of performance data may play a role.

It is worth emphasizing several limitations on this evidence of responsiveness. First, the relationship between actual and perceived quality is modest for citizens as a whole, although it is quite strong for parents, who have the most opportunities to observe schools and arguably have the strongest incentives to be informed. Second, both parents and the public appear to be more responsive to the level of student achievement at a school than to the amount students learn from one year to the next. Finally, citizens appear sensitive to relative differences in school quality within their state (as reflected in school performance on state tests) but insensitive to information on school quality in the state as a whole (as measured by statewide performance on a national assessment).

Even so, at least two policy implications emerge from our results. First, our finding that accountability ratings influence citizens’ assessments of their local schools, coupled with the fact that citizen ratings are more strongly associated with achievement levels than with achievement growth, suggests that featuring growth measures more prominently in school accountability ratings could cause citizens to pay more attention to this barometer of school quality. Second, our finding that citizen ratings are associated with student performance on state tests but not with performance on a national assessment suggests that a closer alignment of state standards (or a move toward common standards across states) might help citizens form more accurate perceptions of their schools. In particular, it could lower perceptions of school quality in states where many students perform poorly relative to national norms but are deemed proficient by the state.

Matthew M. Chingos is a postdoctoral fellow at Harvard University’s Program on Education Policy and Governance. Michael Henderson is a doctoral candidate in Harvard’s Department of Government. Martin R. West is assistant professor of education at the Harvard Graduate School of Education and executive editor of Education Next.
