In a recent piece, my colleague Tom Arnett dissected the question, “Does blended learning work?” His answer? It depends.
He is right. Researchers and school leaders asking that question are asking the wrong one. The question isn’t whether online or blended learning works—we have more than enough evidence (see here and here, for example) that it does in certain circumstances when done well. At the same time, just because a school adopts blended learning does not mean it will automatically achieve good results. A better question is how to do it well for different students in different circumstances.
This question matters—and it gets far too little attention. It is also among the questions that motivated Heather Staker and me to write our just-released book, Blended: Using Disruptive Innovation to Improve Schools. The book is meant as a practical design guide for educators to help them with the front end of creating sound blended-learning environments.
For example, from the get-go we seek to help educators avoid one of the biggest mistakes in implementing blended learning, which is deploying technology for its own sake, rather than to solve a meaningful problem or achieve an important learning goal.
Schools around the world are adopting blended learning to personalize learning, increase access and equity, and control costs. They want to create a student-centered learning system for all students, and blended learning is the most promising way to do so at scale. What educators in those schools want to know is this: what are the right strategies and tactics to use so that blended learning boosts each student’s fortunes?
It’s not easy to get it right. When the Education Achievement Authority (EAA), the state of Michigan’s school turnaround district, launched in the fall of 2012 with 15 schools, hopes for rebirth in Detroit were high in many quarters. The system’s first superintendent, John Covington, adopted an ambitious blended, competency-based model for its schools powered by Agilix’s Buzz software. Has it worked? It’s complicated.
The Fordham Institute recently published an important report, Redefining the School District in Michigan, in which it discusses the competing evidence and details the EAA’s many travails.
According to certain measures, the schools seem to be succeeding, whereas other measures and analyses paint a bleaker picture. In Blended, we write about one of the apparent success stories. At Nolan Elementary-Middle School in Detroit, for example, in 2013, at the end of its first year of turnaround, 71 percent of students achieved one or more years of growth in reading and 61 percent in math. Nolan ranked third out of 124 Detroit schools in reading growth. Nolan uses the Flex model of blended learning—a disruptive innovation relative to the traditional classroom—which many schools in the EAA use.
Today’s “years of growth” measures are often tricky though—both to equate to a state’s accountability system and to understand what they really mean. Certain assessments that produce measures of growth, including the one used by the EAA, do not do so on an absolute scale but on a relative one.
In other words, if an assessment says a student has grown two years, it generally does not mean today that the student went from, say, a “second-grade math level” to the “fourth-grade level.” Instead, the measure is likely comparing that student to others in her “norm group”—students with like characteristics such as level of achievement, age, and so forth. If the student grew more than the average child in that group—which would be calculated to equal one year of growth—then the assessment report would say she grew more than a year.
Here’s the challenge: If students in a norm group toward the bottom of achievement—at the 5th percentile, for example—don’t grow much on an absolute basis, then two years of growth might not be all that impressive. If students toward the top—say, at the 95th percentile—grow a lot on an absolute basis each year, then less than a year of growth might not be that bad. Assessments that calculate growth trajectories in this way could, in other words, bake in lowered expectations for some students and exceptionally high expectations for others.
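To make the arithmetic concrete, here is a minimal sketch of how a norm-referenced growth measure of this kind can work. The numbers are hypothetical and the function is an illustration of the general idea, not the EAA’s assessment or any vendor’s actual algorithm:

```python
# Illustrative sketch (hypothetical numbers, not any real assessment's
# algorithm): in a norm-referenced growth measure, one "year" of growth
# is defined relative to a peer group rather than on an absolute scale.

def years_of_growth(student_gain, norm_group_gains):
    """One 'year' equals the average absolute gain of the norm group."""
    one_year = sum(norm_group_gains) / len(norm_group_gains)
    return student_gain / one_year

# Low-achieving norm group: peers gain few scale points per year.
low_group = [3, 4, 5, 4, 4]               # average gain = 4 points
print(years_of_growth(8, low_group))      # 2.0 "years" from only 8 points

# High-achieving norm group: peers gain many scale points per year.
high_group = [18, 22, 20, 19, 21]         # average gain = 20 points
print(years_of_growth(16, high_group))    # 0.8 "years" despite 16 points
```

Note how the same label can conceal very different absolute gains: the first student posts “two years of growth” on 8 scale points, while the second falls short of “one year” on twice that gain, simply because the peer baselines differ.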
My takeaway from the Fordham report though is that whatever problems the EAA’s schools have had, it doesn’t seem as though the learning model has been the prime cause of those problems per se. The report’s description of the model was quite positive, and reports by various visitors indicate that the models are working well. Governance issues seem to be more at the heart of the EAA’s challenges.
That said, the level of execution required to implement a disruptive model of blended learning in the hardest-to-serve parts of Detroit flies in the face of the recommendations in our new book. Disruptive solutions are generally meant for the simplest problems at the outset, not the most complicated ones, which call for sustaining, not disruptive, innovations. Today, sustaining models of blended learning are generally better matches for core problems. Not every school chooses to follow this advice, and that’s certainly OK. But the caution for schools that choose a disruptive model for a core problem is that implementation will likely require far more effort to explain the choice to the community, prepare for the launch, and execute than if the school had gone with a sustaining model of blended learning.
Given the complex situation surrounding the EAA, that might be a caution worth thinking through.
This piece first appeared on Forbes.com.