Research tells us that disadvantaged students tend to be exposed to lower-quality teachers than their peers. We also know that these students are less likely to take upper level math courses in high school. Is there a connection? Dan Goldhaber and his research partners bring together these two threads of research in a new working paper released at the CALDER Research Conference in Washington, D.C. We sat down with Goldhaber to talk about his new paper, the politics of teacher evaluation, and the value of value-added.
This is an ambitious piece of research, bringing together these two strands of study. Let’s break it down a bit. Did you find that disadvantaged kids have lower quality teachers?
The short answer is yes. The longer answer is that quality is a little bit in the eyes of the beholder. So I always say quality “as measured by….”
And what do you measure it by?
Mostly by “value added,” a statistical measure of the contribution teachers make to student achievement on standardized tests. But we also see inequity in the distribution of teachers when considering teacher experience. And other research we’ve done has replicated the disparities using teacher licensure test performance. In some quarters, there is doubt about value added, so the fact that we see inequity based on other measures as well should give readers confidence in our value-added findings.
And is “disadvantaged” a measure of student income?
We use two different measures. One is under-represented minority status: African American, or Hispanic, or Native American/American Indian. The other is students receiving free or reduced-price lunches.
Why are disadvantaged kids stuck with the lower-quality teachers?
There’s a fair bit of research showing that schools serving disadvantaged students have more difficulty with staffing. We know, for instance, that they receive fewer (and less credentialed) applicants. There’s also evidence that higher-quality teachers are more likely to leave schools serving lots of disadvantaged students. That likely happens because these schools are more difficult to teach in, and because the teacher labor market tends to treat all teaching jobs, at least within districts, as if they are the same.
Moreover, collective bargaining agreements often give more experienced teachers explicit advantages in attaining positions in a district when they open up. This also increases the likelihood that disadvantaged students have less experienced teachers, because more experienced teachers are more likely to move to more advantaged schools.
Your research then makes the connection between these lower-quality teachers in 4th to 8th grade and less student success in later years. How do you measure this?
We’re looking at the teachers that students have in 4th through 8th grade and two different measures: end-of-8th-grade test scores and the number of advanced math courses students take in high school.
And the definition of advanced?
Pre-calculus and above.
And what did you find?
We found lots of different things. One highlight that had nothing to do with teachers was that a lot of the gap we see in end-of-8th-grade test scores and high school course taking between advantaged and disadvantaged students can be explained by a student’s 3rd-grade test. And while that’s not a real surprise, it’s still a little shocking when you see it, because it means that a lot of where kids end up is already kind of baked in by the 3rd grade. I think it suggests that if we are to make a real dent in achievement gaps, we need to do better at addressing inequity that exists early on, or become far more aggressive in interventions between 3rd grade and 8th grade.
But teachers played a role, too.
Yes. We found very consistent evidence that the value-added scores for teachers predict 8th-grade test scores. That’s something we know from other research. More novel is the finding that value added is also a strong predictor of later advanced course-taking. And here even early grade teachers appear to matter.
How do you isolate value-added effects from other factors? As you said, the 3rd-grade baseline predicts later success, and all sorts of issues affect disadvantaged students.
There are two parts to the question. You have to ask: is the value-added measure a good measure of a teacher’s actual contribution to students? You do that through a statistical procedure where you’re basically taking the kids who show up at a teacher’s doorstep and getting all the information that you can about them: their incoming tests, their poverty level, demographics, identification for special needs, etc., and trying to statistically factor those things out so that you are left with a clear picture of what teachers are contributing to student learning gains. My read on the literature is that value added actually does a pretty good job of identifying the contribution that teachers are making.
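The adjustment Goldhaber describes can be sketched as a regression of student outcomes on prior scores and demographics plus teacher indicators, with each teacher’s coefficient serving as that teacher’s value-added estimate. The sketch below is a hypothetical illustration of the general technique on simulated data, not the model or data from the paper; all names and parameter values are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 teachers, 40 students each, with invented "true"
# teacher contributions that the regression should recover.
n_teachers, per_class = 5, 40
n = n_teachers * per_class
true_effects = rng.normal(0.0, 0.3, n_teachers)

# Simulated student characteristics: prior test score, poverty indicator,
# and teacher assignment.
teacher = np.repeat(np.arange(n_teachers), per_class)
prior = rng.normal(0.0, 1.0, n)
poverty = rng.binomial(1, 0.4, n)

# Simulated outcome: driven by prior score, poverty, the teacher's
# contribution, and noise.
score = 0.7 * prior - 0.2 * poverty + true_effects[teacher] + rng.normal(0.0, 0.5, n)

# Design matrix: one dummy column per teacher (no global intercept, so each
# dummy's coefficient is that teacher's estimated effect) plus the student
# covariates being "factored out."
X = np.column_stack(
    [(teacher == t).astype(float) for t in range(n_teachers)] + [prior, poverty]
)
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

# The first n_teachers coefficients are the value-added estimates: what is
# left of the score variation after student characteristics are controlled for.
value_added = coef[:n_teachers]
```

With the covariates statistically controlled, `value_added` tracks the simulated teacher contributions closely, which is the intuition behind using the measure as an estimate of a teacher’s contribution to learning gains.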
And then for our research, we have to both have a good measure of value added and ensure that when we’re using that measure, we are doing a good job of also accounting for other things that might be going on during a child’s schooling that might also affect 8th-grade tests and high school outcomes. I think that there’s room to be worried about whether we are actually capturing everything. That’s why in the paper, especially with regards to the high school course outcomes, we’re being careful about using causal terms and instead say things like, “The teacher distribution predicts, or is associated with, these high-school outcomes.”
Given what you found, what are the implications for educators and policymakers?
School systems and states certainly need to recognize that one teaching job is not necessarily the same as another; teaching in disadvantaged settings, in particular, is likely to be more challenging. Hence, they need to provide appropriate incentives to teachers if we want to address inequities. It’s pretty much that simple.
Some places are doing that. The District of Columbia has bonuses for working in impoverished, underachieving schools.
D.C. is a huge success story. Some places are recognizing the fact that a teaching job is not totally generic, that jobs differ from one another. That is unusual. It would be good if more places actually built in incentives.
Is this something school boards should be doing?
Sure, but providing incentives may be something that you need to do above the school district level, given the difficulties of local politics. When I served on a school board, we had a pretty good sense of when moving a teacher from one school to another was going to cause an uproar. And it’s much more likely—not a great surprise—but much more likely that if you move a favored teacher from an advantaged school, you are going to hear a lot from parents. You also worry about what that might mean in the next election. So there’s a lot about the way the system works that works against disadvantaged kids having equal access to effective teachers.
So you’re talking about individual teachers making the difference?
Yes, our paper is just one more piece of evidence that you need to focus on individual educators to move the needle in schools. Indeed, everything we’ve learned about schooling over the last decade and a half suggests the importance of individual teachers. Unfortunately, as a country, we’ve learned the wrong lesson from Race to the Top and teacher evaluation reforms. There’s been a real backing off of a focus on individual teachers, largely for political reasons because of the huge pushback against evaluation reforms. The empirical evidence continues to suggest we need to focus on individual teachers, their performance, and how to improve it.
You mentioned the politics of this. The whole value-added thing is so fraught, especially when it’s connected to teacher pay. How do you overcome that?
It makes sense to use value added at least as a diagnostic tool. But we should also recognize that it’s not the be-all and end-all. For example, I don’t think it makes sense to have a teacher’s performance evaluation based solely on value added, nor do I believe that happens anywhere, despite what you might hear. Unless you are willing to say we shouldn’t be doing any evaluation at all, we have to acknowledge that any measure we use is going to have some problems. And that’s just life.
Read more CALDER working papers here.
— Phyllis W. Jordan
Phyllis W. Jordan is FutureEd’s editorial director.