Why Is Competency-Based Education So Hard to Study?

A few research pitfalls seem to be creeping into the still nascent world of K-12 competency-based education: first, the challenge of moving from discussing high-level theory to precisely describing competency-based practices; and second, the challenge of going from identifying specific practices to designing sufficiently specific, appropriate evaluations to measure the effects of those practices.

Both of these tensions can make conversations about competency-based education feel speculative. The term “competency-based” often describes a wide range of classroom practices, but schools that call themselves competency-based may not subscribe to all such practices. And even when these practices are spelled out, we have yet to study them in isolation, to understand which—if any—drive student growth and in what circumstances. In order to really study competency-based models, the field may need more specific categories than “competency-based” to translate the theory into practice; and we likely need new research paradigms to evaluate these specific practices.

This month’s RAND study on three competency-based education pilot programs is a great example of these challenges. The study looks at how three relatively high-profile institutions in competency-based education—Adams County School District 50, the Asia Society schools, and the School District of Philadelphia—implemented competency-based pathways in five school districts during the 2011–12 and 2012–13 school years. Some of the findings echo much of what I observed through interviews with 13 schools in New Hampshire last year: competency-based education looks different in different contexts; schools implementing competency-based education face real technology challenges; and different students in such systems likely have different needs. The researchers took pains to try to isolate the effects of competency-based education on student outcomes and dropout rates, but in some cases were unable to find statistically significant differences.

The study itself is a great read, but it also confirmed the tensions inherent in trying to study competency-based approaches. Looking ahead, here are three questions I find myself asking about the research:

Is it useful to measure the effects of a new philosophy?
Competency-based education may mean measuring credit differently, but it also means adopting a new philosophy about how students should progress through material. To complicate matters, many schools implementing competency-based education are also founded on additional and varying philosophies—MC2 in New Hampshire focuses on self-directed projects that link students to real-world experiences; BDEA in Boston aims to ensure efficient graduation for off-track youth; Summit Public Schools in the Bay Area uses sophisticated technology to drive toward personalization at scale.

If each of these models qualifies as competency-based, however, is it viable to isolate the “competency-based” approach from the other philosophies guiding these different systems? Is it even useful to do so?

Much like the last decade’s emphasis on differentiated instruction, or this decade’s focus on blended learning, competency-based approaches writ large may not be a useful—or sufficiently narrow—unit to study. In light of the cultural shifts implicit in a philosophical shift, there may be room for anthropologists or sociologists to try to capture qualitative differences between competency- and time-based systems. But I worry that researching “competency-based education” full stop—in schools that look so different—risks a research cycle that simply reifies a broad category with little ability to inform policy or practice.

Could a more precise taxonomy help measurement and implementation?
In light of the giant category that is competency-based education, it makes sense that researchers and practitioners have attempted to define the tenets of a competency-based system. The RAND study highlights three key variables that schools emphasized to varying degrees—flexible pacing, student choices to personalize learning, and evaluation based on evidence of proficiency. CompetencyWorks offers a similar five-part definition. These categories begin to break down “competency-based” into more manageable parts. Identifying such practices in the field with ever-greater descriptive precision will make it easier for practitioners to replicate practices and for researchers to evaluate them. Combinations of practices may also prove to be important categories: looking ahead, much as our blended-learning taxonomy work has aimed to do, the field might try to formulate a taxonomy that describes the universe of practices in competency-based schools and, from those, extrapolates common combinations of practices or “models.”

Codifying these models might also clarify demand for technology tools that could support different competency-based approaches. Models that emphasize performance assessment and internships will require different tools than those that rely heavily on blended learning. Distinguishing among the needs of these models might generate better tools for each, rather than one-size-fits-none platforms.

Do we need new paradigms to measure student outcomes and growth?
Such a taxonomy of competency-based models could also start to enumerate the different ways that different systems conceive of student outcomes. Growth in a competency-based system is most straightforward if you have a singular curriculum that students work their way through. For example, at Milan Village School in New Hampshire, students work their way through numbered playlists; there, student growth is reflected in the relative pace at which students master each step of the math curriculum. But as the RAND paper points out, some competency-based schools place more emphasis on “student choice,” including in terms of curricular pathways or student-designed projects. The concept of choice—which I think is equally general and hard to measure in a meaningful way—introduces a whole new set of challenges around how we measure growth. If students are free to blaze more individualized paths, it will be difficult to compare their progress with other students’ at a single moment in time, in the way that research (and current accountability regimes) typically requires.

Besides making it harder to compare students at one time, we may also need to be attuned to the “hockey stick” growth effect that some competency-based schools, like MC2, have observed. Students need to learn how to learn in a competency-based model that gives them greater choice; once they do this, they may “take off” in their learning. In other words, there may be a delay, and then a spike in progress as students learn to take ownership of their learning. As the RAND report alludes to, this likely requires a longer view of student growth than the “snapshot” approach we often take to assess outcomes.

– Julia Freeland

Julia Freeland is a research fellow in education at the Clayton Christensen Institute. This first appeared on the Christensen Institute’s blog.
