Recently, Education Week's "Living in Dialogue" blog featured a number of provocative posts on Teach For America. Phil Kovacs, an assistant professor at the University of Alabama-Huntsville, penned a guest post that offered a sharp critique of TFA and the research supporting its efforts. There was also an impassioned back-and-forth between two TFA corps members on TFA's "locus of control" concept. Given high interest in TFA, the relevance of research on TFA to the broader teacher quality agenda, and my own long, complicated history with TFA as a critical friend, I thought it worth sitting down with TFA's VP for research, Heather Harding, to get her take. (Full disclosure: I recently hosted a working group for TFA which pulled together TFA leaders and a number of outside researchers to discuss "next generation" research possibilities. Veteran readers will also recognize Heather as a former RHSU guest blogger.)
Rick Hess: Heather, what’s your role with TFA?
Heather Harding: I am a vice president of research at Teach For America. Our focus is to initiate and help facilitate external partners doing research on the impact of Teach For America. I’m essentially a matchmaker or a conductor for all the folks internally who are working on programs and continuous improvement and the larger research community.
RH: At this point, TFA has been with us for a touch over 20 years. What do we know about TFA at this point? If there are three or four key findings, what are they?
HH: We know that Teach For America is good at identifying the folks who are going to be leaders in a variety of sectors and redirecting their energy towards the education sector. That includes classroom teaching but it also speaks to education leadership, policy, and those sorts of things, with entrepreneurship being a key piece of that. The other thing we know is that Teach For America corps members tend to outperform their peer teachers, both beginning and more experienced, in math and science. And people can quibble because some of those effect sizes are small, but if you look through the trend line over time, even in the early studies, you see this pronounced effect in math and science teaching.
And the third thing that we know is that Teach For America programmatically has made dramatic changes in training and ongoing support that seem to have allowed us to maintain quality as we grow to scale. The difference between training and supporting 500 teachers in the 1990s and 4,000 teachers, 6,000 teachers in the new millennium is [huge]. It's something we've had to think hard about: how to maintain quality over time in selecting them, training them, and then offering this development program. And we haven't seen a downward trend in the results on student achievement, so I think you have to believe that we're maintaining quality and paying attention to continuously improving the model.
RH: Recently, there’s been criticism of TFA’s research record. Philip Kovacs, a professor at the University of Alabama-Huntsville, suggests that we don’t really have much sense of how effective TFA teachers are, that we’re not doing a very good job of understanding their impact, and that we’re paying insufficient attention to the effects of TFA-induced turnover. What are your thoughts on this score?
HH: In the last five years, we’ve been relatively fortunate that, one, there’s been a number of studies mainly coming out of the states with stronger data. So New York has strong data sets [as do] North Carolina, Louisiana, [and] Tennessee. Policy folks and economists interested in teacher quality and teacher effectiveness have [been able to] conduct studies that we’ve been happy to participate in that compare teachers from various sources.
The Kovacs debate is largely one that relies on the peer review process. [Ed. Note: one of Kovacs' criticisms surrounding a study by George Noell and Kristin Gansle of Louisiana State University and hosted by the National Council on Teacher Quality on TFA in Louisiana was that the study was not peer-reviewed.] We think that's important, but we also think that if you look at the evidence, both peer reviewed and non-peer reviewed [but featuring] a standard methodological rigor, we see that there's clearly a pattern that Teach For America corps members achieve academic gains that are equal to or larger than those of other new teachers and, in some instances, more experienced teachers. It's a small relative advantage, but it does seem relatively clear in math and science and high school…[and] we see that other areas like middle school, English, language arts are slowly catching up. So we feel encouraged.
Many of these studies come out initially in a pilot form or are self-published and then they go through the peer review process. So we see that as important, but academic processes are long and we're a program that changes our model and tries to make improvements every year. So we want to grab whatever evidence we can. And we also hope that as data systems become stronger, we can have these kinds of studies in every state. We've got studies going on right now in Missouri that we're collaborating on or participating in. We're trying to get one up in Florida. It looks like there's going to be one in Arizona. We really welcome a lot of activity on this front.
RH: This doesn’t necessarily address the concern that much of this work has not appeared in academic journals or undergone peer review. How do you respond to that concern?
HH: I think that the methodology across [the studies] is very similar. While all of them haven’t been through peer review, I don’t think that they have huge methodological challenges. As you know, there are all kinds of philosophical wars about methodology and, frankly, the relevance of standardized test scores. We think that’s one vehicle to consider our impact. We’d love more studies on different metrics.
One of the things that we don’t necessarily have a lot of control over is what a researcher decides to do with the study that they write. We are supportive of people going through the peer review process…[but] we’re partnering with folks who are going to do research probably with or without us.
RH: Now, how about the critique that the research focuses fairly narrowly on value-added reading and math scores? Kovacs suggests that TFA places a premium on driving those scores, and therefore, while it’s not a surprise that TFA teachers seem to do okay by that metric, it’s unclear whether the students are benefiting to the degree that value-added might imply.
HH: I think we want to know more about how to better study those other things. We’re very interested in that. And in our internal system, we actually use our “teaching as leadership” rubric to test for those things that aren’t necessarily going to show up on a test score. I think where you have a great test you’re going to have good teaching and learning. When you have a not so great test, you might be concerned. So, while we think the tests are telling us something important, we don’t think they’re the only metric out there.
However, the currency of policy research [today] is the test score. That’s really the legacy of NCLB that we all have to live with. It’s not telling us nothing, right? So it’s not a useless exercise to look at the student learning that’s reflected in a test score. I look to the lessons that Louisiana has provided. The state department took the initiative and looked at their teacher prep programs comparatively and used value-added to do that, and then used that information to push back on programs where they were falling down. That’s how we use this information.
That being said, we also look at observational data on teachers. We’re…looking to add some student surveys. We’re interested in all of it. The fact that we have work focused on student test score data doesn’t mean that we are exclusively interested in that. We’re interested in that, no doubt, but we also think you need to get as much information as you can. Test scores don’t really predict [a student’s] destiny and their educational opportunities.
RH: Speaking of which, there was a recent debate between two TFA corps members about the whole “locus of control” question and whether TFA’s commitment to having its corps members drive student learning means that TFA can seem dismissive or unaware of the other challenges in children’s lives. How do you think about this challenge when evaluating teacher performance?
HH: Our rubric is more expansive than just measuring students’ learning through test scores. If folks look at our rubric they’ll see that we’re looking at things more holistically…Just in the last year we’ve begun to look at creating a richer portfolio of data that we can collect from teachers about their impact in the classroom. It includes formative assessments, both off the shelf as well as developed by teachers. It includes observational data. In the last year, we’ve incorporated a real shift in language that talks about transformational teaching…that makes a difference on any growth measure that you might select, but that it’s also important for the work we’re trying to do that teachers consider what would put kids on a different life trajectory and what that’s going to mean. So you might imagine that it’s good for kids to know their multiplication tables, but it’s also important for them to understand if they want to be an astronaut or a medical doctor, what would the course sequence look like and can they see themselves filling those roles?
RH: What are a couple current research relationships that TFA is involved with?
HH: We’re in an ongoing relationship with Ed Labs, Roland Fryer’s outfit at Harvard. He has continued to have an interest in how programs can further engage young leaders in education reform. We did two studies that came out over the summer focused on our selection model and on our alumni’s perceptions and their continuation and work in the education sector.
We’re going to continue to look again at selection and, in particular, we’re going to look at how to better screen candidates…We’re also going to do some testing around professional development interventions that seem to make a difference for impact on value-added. We have an ongoing relationship with Monica Higgins, who is looking at our alumni impact, thinking about whether and how our folks become interested in social entrepreneurship and what kinds of things we do or what kind of experiences they have [that prepare them for] those challenges.
We want to know a little bit more about Teach For America's alumni long-term and their retention in the sector. We have another project that's looking at the relationship between Teach For America corps members in a school community and the rate at which students in that school apply to more selective colleges. And this work is being done by a young scholar named Jonathan Meer, who is at Texas A&M, along with Caroline Hoxby. It's not a causal relationship, but the correlation suggests that if you bring in folks who have a higher-profile college experience, that might encourage young people to apply to a wider variety of schools.
RH: If you had to name a couple key research priorities for TFA going forward, what are they?
HH: We want to continue to understand the value added by our teachers in every market. We’re a national program and we have studies that look at the impact on student achievement in about six states. We’re in 30-plus states, so we need these studies all over because our hunch is that the teacher market is different. We want to continue to do that work and find good partners.
We know that a big part of our mission is focused on what alumni do. We have a long way to go to figure out what we mean by leadership in the education sector, so we need to do some internal work, but we also want to keep tracking what our grads do and what our alums do. We’re interested in their role in school leadership and understanding the barriers for them moving forward. I think the Fryer and Higgins studies are really cutting-edge and we want to continue that momentum.
Finally, I would say that we need to start thinking about the macro impact of Teach For America. So what has it meant given that we're 20 years old? What has it meant for Teach For America to be in the ed reform sector, what's been the impact on policy, on how we think about what investments to make, and on Teach For America's impact in communities where we've been for 20 years?
RH: There are voices in the education research community who have felt that TFA is not that interested in the traditional education research space. I’m curious whether you think TFA has contributed to that impression and whether you’re interested in working with researchers who are not already partnering with you?
HH: I think that for a long time Teach For America was small enough that helping somebody gather data for a relatively small impact study was not very interesting or didn't seem like it would be a worthwhile pursuit. I think that we are operating at scale now and there are a lot of opportunities to partner with us and get access to some of the data that we have. We have a robust network of universities that partner with us so that our folks can get certified and get master's degrees. Faculty on those campuses have some advantage in terms of having access to programs. But my team at Teach For America fields all kinds of requests to do research with us and we also go out looking for people to do that kind of work with.
We don’t fund a lot of those activities but we do partner with people to go out and identify funding. I think that’s sometimes been the challenge. We’ve been criticized for not being open, but in my four-year tenure, I think we’ve only said “no” to a couple of proposals that have been presented to us.
RH: If somebody wanted to reach out to you guys, who is the appropriate person to reach out to and what’s the best way to get a hold of them?
HH: On our webpage, on the research section, we have an email address that goes right into our request system. Or people can reach out directly to me: firstname.lastname@example.org.
This blog entry also appears on Rick Hess Straight Up.