Straight Up Conversation: RAND Education Chief Darleen Opfer



By Frederick Hess | 01/18/2019


Darleen Opfer serves as director of RAND Education and Labor, a division of the nonprofit, nonpartisan RAND Corporation. She leads a 200-person staff that conducts research for major government agencies and private foundations. Along with her extensive work in the U.S., Darleen has researched teacher professional development in countries such as England and Turkey, and served as an education advisor in Norway, Israel, India and South Africa. Before RAND, she was director of research and a senior lecturer at the University of Cambridge. I recently had the chance to chat with Darleen about the state of education research, and here’s what she had to say.

Rick Hess: Darleen, readers probably have a vague notion that RAND Education does research and analysis, but can you explain more concretely what it is you do?

Darleen Opfer: RAND was founded 70 years ago to provide objective analysis on defense issues for the U.S. government. In the late 1960s, we expanded our work to social and economic policy areas such as education and health. For several decades, RAND Education has conducted research on schools and other educational institutions from pre-K through postsecondary, but recently we combined the education and labor research groups. We’re excited about this new structure because it allows us to connect our education research with our significant body of research on workforce skills needed for individuals to be successful. I think what people would find surprising about RAND is the breadth of research that is conducted. Our education and labor research is quite tame compared to work my colleagues are doing to track guns on the dark web, fight the opioid crisis, and clean up space junk.

RH: How did you wind up at RAND, anyway?

DO: RAND found me. I was at the University of Cambridge, which was amazing, but I became increasingly frustrated with research that didn’t necessarily make a difference in the lives of kids and teachers. RAND offered me the opportunity to be part of an organization whose research makes a difference.

RH: What are a couple of the most interesting projects currently on your docket?

DO: We just released our final report about best practices for school districts to implement summer learning programs. This report is a product of a six-year study sponsored by the Wallace Foundation. Through this randomized controlled trial, we assembled the largest dataset out there about summer learning—over four summers, we conducted thousands of hours of observations, surveyed thousands of summer teachers, and did hundreds of interviews. We will be presenting very specific recommendations about how to use time, how to train and recruit teachers, and when to plan for summer programs. Another new research area, which we call the RE: Work initiative, focuses on the future of work and the role of education in powering economic prosperity for individuals. This effort arose out of our decision to bring together education and labor researchers to talk about why the education system and the labor market seem to be leaving so many people behind. Ultimately, we want to help create more pathways to the middle class.

RH: This past summer, RAND issued the final results of a seven-year evaluation examining the Gates Foundation’s massive Effective Teaching initiative. Most observers regarded the results as quite disappointing for a once-heralded effort. Can you talk a bit about that project and the takeaways?

DO: It was unfortunate that the coverage of that report was so negative in the education press. The Gates Foundation deserves credit both for sticking with their intervention and the evaluation. A major problem I see in education research is that people jump from one thing to the next without giving interventions enough time to even have an impact. From the evaluation, we gained insights about why the initiative didn’t make an impact, which can be used to help future initiatives succeed—if people are willing to pay attention to the lessons learned.

RH: And, to your mind, what are a couple of the key lessons that people should learn?

DO: We learned that teachers and principals valued some aspects of the evaluation system, particularly the classroom observation rubrics which not only provided teachers with more detailed feedback than before, but also helped to create a common language around what good teaching looks like. At the same time, once the evaluation measures became high stakes, teachers’ support for them declined, and principals seemed less willing to provide negative feedback—a finding that has been reinforced in other research. We also found that implementing these types of systems places a heavy time burden on principals, which suggests that districts or Charter Management Organizations [CMOs] might want to explore alternative approaches that reduce demands on principals’ time by shortening observations or having someone other than the principal conduct some of them. A final key takeaway is that the significant turnover in the leadership of the districts and CMOs proved to be a barrier to consistent messaging and coherent implementation.

RH: How uncomfortable is it to issue a report that says a client’s big project didn’t pan out as hoped?

DO: It can depend on the client. We never want to take anyone by surprise, so we would make sure the client is briefed before the report is published. We typically give the client an opportunity to review all products and share any concerns, though we ultimately decide whether and how to address them. In the case of the Effective Teaching report, we provided interim reports and communicated informally with the Bill and Melinda Gates Foundation on a regular basis. The Foundation knew what was happening and still stuck with it. I think they understood that it is as important for the field to understand why something didn’t work as it is to understand why something did. If you have a client who cares more about the field than they do about the hit they may take in the press, then putting out a report such as we did for the Effective Teaching project isn’t that difficult.

RH: What kind of safeguards do you all use to reassure outside observers that RAND researchers aren’t catering to clients or pulling their punches to keep clients happy?

DO: RAND’s mission is to produce research that makes a difference and is in the public’s interest. We reserve the right to publish independent research and regularly walk away from doing work when a potential client disagrees with this. Every report is also peer reviewed both internally and externally to make sure what we are doing meets the highest standards of quality. The insistence on both publication rights and the extensive nature of our peer review process is unique in the contract research world but central to RAND as an organization.

RH: Speaking of contract research, the Wallace Foundation recently commissioned a RAND report on Social and Emotional Learning [SEL] interventions. Given all the hype around SEL, I’m curious: What did you all find—and how much confidence should parents and educators have in these interventions?

DO: As you know, there’s been a proliferation of new instructional programs and materials to promote SEL. Wallace asked RAND to generate guidance for education leaders who want to use their Every Student Succeeds Act [ESSA] funds to pay for SEL programs. We found that there are numerous programs that are evidence-based when applying the ESSA definitions, but several factors—including the uneven rigor of the research designs, the lack of high-quality outcome measures, and the importance of contextual conditions in influencing how programs are implemented—limit our ability to provide definitive advice on what will work in a given school or district. So to answer your question, the jury is still out on the effectiveness of SEL programs. We recently released a follow-up report that provides practical tips on how to conduct an evidence-based SEL needs assessment tailored to local contexts.

RH: When it comes to professional development [PD], we all know how much faith people tend to put in “better PD,” and how skeptical many teachers and researchers are about the value of all this. You’ve studied PD across the U.S., England, and Turkey. What can you share on this score?

DO: PD for teachers is often of low quality, but it doesn’t need to be. PD is a significant investment for districts and states, yet districts rarely collect information after the fact about whether teachers implemented what they were taught or whether student learning improved. Some of our newer work is examining more customized, job-embedded approaches to PD, which are more likely to influence teachers’ practices and improve student learning. But I don’t think we’ll see a sea change in the quality of PD until policymakers decide that PD funding cannot be used for a program that lacks sufficient evidence of effectiveness. Until that happens, many PD providers will not be incentivized to improve what they offer.

RH: So, what does customized, job-embedded PD look like? Is there an example that comes to mind?

DO: This is PD that is tailored to a teacher’s role and context rather than relying on a one-size-fits-all format such as a large group workshop. High-quality PD that is job-embedded provides teachers with opportunities to try out what they’re learning and to get feedback on their practices in real time. This can occur through individual coaching and mentoring, for example. Professional learning communities in which teachers collaborate with their colleagues to plan lessons or review student work can also be a form of job-embedded PD because teachers are learning from one another. We’ve seen these models in many schools we’ve studied, though often conditions, such as limited time or funding, hinder the implementation of these models.

RH: Another intriguing project you all have tackled is a multi-nation examination of instruction in middle school math. What did you learn?

DO: Our middle school math study, the TALIS Video Study for the OECD, looks at the relationship between teaching practices and students’ achievement and social and emotional outcomes in mathematics. A study like this has never been done before because it’s difficult to develop cross-cultural instruments that can measure teaching, and the fielding requirements are complex. An interesting thing we’ve learned already is how much variation in mathematics content exists. Eckhard Klieme, at the Leibniz Institute for Research and Information in Education in Germany, worked with the eight participating countries’ mathematics experts to map curriculum topics taught to students between 13 and 15 years old. That mapping exercise resulted in only two common topics—linear equations and quadratic equations. We selected quadratic equations as the focus of the study, but we’re finding that what counts as quadratic-equation content also varies across the countries. This variation is important to understand because too often we expect students to demonstrate understanding when they haven’t even been given the opportunity to learn.

RH: From your perch at RAND, you’ve got a bird’s eye view of so much of the education research happening in the U.S. and around the world. What are a couple things that have caught your eye that may be news to those of us more enmeshed in our day-to-day?

DO: With the growth in technology applications in schools, combined with advancements in data processing and analysis capabilities, we need to think more about what the next generation of ed-tech reforms will look like and how educators can create the conditions to maximize their benefits and minimize harms. But we also need to do so in a way that doesn’t further harm the teaching profession. There has been scope creep, and the demands on teachers to be academic instructors, social and emotional instructors, first responders, and so on can be overwhelming. Advances in personalization of education, while exciting, make lesson planning and progress monitoring even more challenging. As education delivery and expectations change, we need to develop a sustainable and fulfilling teacher career trajectory and be able to forecast needs and demands of the future teacher labor market.

RH: Final question: What’s an especially promising development you see when it comes to education research? And, what’s the one development in education research that you find most disconcerting?

DO: The use of randomized controlled trials and other rigorous research designs has grown tremendously, resulting in valuable evidence on education programs and practices. At the same time, we are starting to see the limits of these approaches; there has been a replication crisis not just in education but in several fields, where we’ve seen that positive effects often don’t hold up when an intervention is studied by a different research team or in a different context. This points to the need to better understand what’s driving varying results. Because of that, I think implementation is finally getting its due in research. We all know how critical it is to implement a program long enough and well enough to see results, and how often that doesn’t happen in schools. So I’m happy to see increased focus on how educators implement reforms.

But something I find problematic is that even with the increase in quality, we are not seeing a comparable increase in the use of research. One of my priorities is to ensure that RAND’s work is conducted and disseminated in a way that will improve decisionmaking—this is RAND’s core mission and is the reason researchers like me work here. In order to see more research being used, we need to be clear on the utility of our research. Here at RAND, we’re working on how to improve our ability to tell educators what they can and should do based on the research evidence we produce. The onus is on the research community to help educators use evidence-based practices; if we do that, we can move the needle significantly when it comes to bettering education systems worldwide.

— Frederick Hess

Frederick Hess is director of education policy studies at AEI and an executive editor at Education Next.

This post originally appeared on Rick Hess Straight Up.
