“There are so many digital resources out there, I am lost as to which ones are good.
I usually try things that some of the more technology-knowledgeable people I teach with [use].”
From “Teachers Know Best,” Bill & Melinda Gates Foundation, 2015, page 21
Teachers in the personalized learning (PL) schools we visit are using a wide range of digital tools—sometimes picking up and dropping them at a rapid clip—but their decisions about which tools to use generally aren’t guided by systematic evidence. Instead, teachers tend to rely on their colleagues for advice. That’s understandable, but it means that teachers have little assurance of a product’s effectiveness, and students tell us they feel like guinea pigs as teachers cycle through different tools.
Knowing that teachers will turn to their professional networks for advice, an urgent question for the field is: Are there ways to enrich those networks with more systematic evidence on the quality and impact of digital tools?
Since teachers are far less persuaded by research studies than by their colleagues’ own experiences with interventions, the answer probably doesn’t lie in creating a new, massive clearinghouse of products or research studies.
First, some of these clearinghouses already exist. Consumer Reports-style websites like EdSurge and Common Sense Education, for example, cover thousands of technology products across a wide range of subjects and grades. While organizations like EdSurge can provide a “concierge” service to help schools and districts find digital tools, many educators won’t have access to such supports, nor will they have the time to pore over multiple websites to find research-based tools.
Second, the K–12 educational technology marketplace is massive and growing at a rapid-fire pace. Stacey Childress, CEO of NewSchools Venture Fund, recently wrote that investments in K–12 technology companies ballooned from roughly $91 million in 2009 to $643 million in 2014. The research community simply can’t keep up with this dramatic expansion of companies and digital offerings. Products without any research (much less rigorous research) will continue to be available to teachers and find their way into classrooms.
Third, even when rigorous research does happen, if it regards digital tools as a treatment (akin to a pill), it may overlook a critical factor: a technology’s effects depend on the interaction among the technology, teachers, pedagogy, and the context in which it is all happening. Even if a tidy randomized trial shows positive impacts for a digital tool, it is still important to consider how that tool fits into the entire instructional program of classrooms and schools.
Given all this, helping teachers, schools, and districts learn how to generate and use evidence themselves may be a promising path toward injecting more evidence into decisions about digital tools for the classroom. Several initiatives, methods, and tools already available seem a logical place to start. Examples include:
• The Proving Ground initiative at Harvard University’s Center for Education Policy Research helps districts and charter school networks design and use a deliberate, analytical approach to gathering and using evidence to test digital tools they might adopt systemwide.
• The Ed Tech Rapid Cycle Evaluation (RCE) Coach, created by Mathematica Policy Research, gives schools and districts a process to follow and a tool to evaluate educational technology. The tool walks practitioners through a five-step process covering everything from planning an evaluation to summarizing the results. (Program materials say the typical RCE lasts three months start to finish.)
• The Carnegie Foundation for the Advancement of Teaching has created a range of resources and hosts an annual summit to help educators and others improve their classrooms and schools through a problem- and user-centered approach to learning and improvement, one that leverages rapid testing and networked learning communities.
More homegrown examples are also popping up. In Colorado, districts grappling with PL have formed a network to problem-solve issues jointly. The regional support agency that coordinates the network walks teachers through a Plan-Do-Study-Act inquiry cycle focused on a particular PL problem, with the goal of building this analytic process into their daily work.
At this point, we can’t say for sure that these initiatives, methods, and tools really work. But teachers’ hunger for guidance and information on digital tools in a fast-changing tech landscape certainly suggests that these and other approaches toward practice-based evidence generation would be useful initiatives for districts and partners to explore in earnest.
— Betheny Gross and Michael DeArmond
Betheny Gross is senior analyst and research director at the Center on Reinventing Public Education at the University of Washington Bothell. Michael DeArmond is senior research analyst at the Center on Reinventing Public Education.
This post originally appeared on CRPE’s The Lens and is the seventh installment in their “Notes From the Field” series.
Last updated May 8, 2017