New Research Answers Whether Technology Is Good or Bad for Learning

For years educators and scholars have debated whether technology aids learning or inhibits it.

In the most recent issue of Education Next, for example, Susan Payne Carter, Kyle Greenberg, and Michael S. Walker write about their research finding that allowing any computer usage in the classroom “reduces students’ average final-exam performance by roughly one-fifth of a standard deviation.” Other studies have shown similarly dismal numbers for student learning when technology is introduced in the classroom.

Yet there are also bright shining stars of technology use—both in proof points and in studies, such as this Ithaka study or this U.S. Department of Education 2010 meta-analysis.

So what gives? Since 2008 I’ve, perhaps conveniently, argued that scholars and advocates on both sides of this debate are correct. As we wrote in Disrupting Class in 2008, computers had been around for two decades. Even 10 years ago, we had already spent over $60 billion on them in K–12 schools in the United States to little effect. The reason, quite simply, was that when we crammed computers into existing learning models, they produced begrudging or negative results. To take a higher education example, when I was a student at the Harvard Business School, far fewer of us paid attention to the case discussion on the couple of days at the end of the term when laptops were allowed, as we chose instead to chat online and coordinate evening plans. In that context, I would ban laptops, too.

When the learning model is fundamentally redesigned to intentionally incorporate the benefits of technology, say, in a blended-learning model, however, you can get very different results. To use another personal example, I fervently hope that the public school district where my daughters will go to school will comprehensively redesign its learning environments to personalize learning for each student through the use of technology. As we disruptive innovation acolytes like to say, it’s almost always about the model, not the technology.

This finding isn’t unique to the technology of computers in classrooms. It was true with chalkboards as well.

As Harvard’s David Dockterman recounts, the blackboard was reportedly invented in the early 19th century. The technology was adopted quickly throughout higher education in a lecture model to convey information to all the students at once. The first recorded use in North America was in 1801 at the United States Military Academy at West Point—ironically the location of the study that Carter, Greenberg, and Walker conducted—and it spread quickly.

Having observed the success of the blackboard in college, educators began installing the technology in schoolhouses, but teaching and learning changed minimally. The blackboards were largely unused because teachers had difficulty figuring out how to use them. Why? At the time, the prevalent model of education in public schools was the one-room schoolhouse in which all students, regardless of age or level, met in a single room and were taught by a single teacher. Rather than teaching all the students the same subjects, in the same way, at the same pace—like in today’s schools—the teacher rotated around the room and worked individually with small groups of students. As a result, the blackboard didn’t make much sense in the context of the one-room schoolhouse because the teacher rarely, if ever, stood in front of the class to lecture.

It wasn’t until the early 1900s when the public education system changed its instructional model—to today’s factory model—that the blackboard became a staple of American education. Lesson? The model matters.

Fast forward to today, and we see the same dynamic. A new—and very helpful—analysis of the research helps tease this out and perhaps can at last break the infuriating log-jam between those who argue technology is a distraction at best and those who argue it is an extremely positive force.

At J-PAL—MIT’s Poverty Action Lab—Maya Escueta (Columbia), Vincent Quan (J-PAL North America), Andre Joshua Nickow (Northwestern), and Phil Oreopoulos (University of Toronto; Co-Chair, J-PAL’s Education sector) released a review of more than 100 experimental studies (randomized controlled trials and regression discontinuity designs) in education technology to examine the evidence across four key areas: access to technology, computer-assisted learning, technology-based behavioral interventions in education, and online learning.

Among the findings, according to the summary J-PAL provided:

• Computer-assisted learning, in which educational software helps students develop particular skills, is particularly promising, especially in math. This is likely because of the software’s ability to personalize by adapting to a student’s learning level and letting the student learn at the right pace for her, as well as its ability to provide teachers with immediate, actionable feedback on student performance. This is of course no surprise to those of us who have been excited about blended-learning models that personalize learning for students.

• Technology-based behavioral interventions—like nudging a student to register for a course—produce consistently improved learning outcomes.

• Initiatives that provide computers to every student in a classroom do not improve learning outcomes. That is very predictable given our research on the perils of cramming technology. I’ll repeat myself here: You have to focus on the learning model first, followed by the technology in service of that learning model. Initiatives that start with the technology almost always fail in my experience.

• Research on online courses is still early, but it appears that “blended” courses produce outcomes similar to those of in-person courses, which could drive down costs. In-person classes outperform fully online ones—a reason to keep fully online courses focused on areas of nonconsumption, where the alternative is nothing at all and they therefore aren’t competing against an in-person course.

In my view, this is what I’d expect a review to find, as it points to the tremendous promise of technology to personalize learning (note: the outcomes here still rely on good learning design) and the peril of merely cramming technology into existing, analog learning models.

Will this spur the research community to take note and sharpen the questions it asks about technology and learning going forward? Let’s hope so. It’s high time we moved beyond a broken debate, and simplistic research, over whether technology in education is good or bad; that serves no one’s interests.

— Michael B. Horn

Michael Horn is a co-founder of and a distinguished fellow at the Clayton Christensen Institute for Disruptive Innovation.

This post originally appeared on ChristensenInstitute.org.
