
Every field has a technique that’s, shall we say, uncomfortable. Something simple, effective, and powerful. But also something professionals quietly avoid.
In reading instruction, that technique was phonics. In mental health, it is behavioral activation.
An Inconvenient Finding
Behavioral activation, or BA, is almost offensively simple. You help people do more things that generate positive reinforcement and reduce avoidance. You schedule activity—hopefully pleasant activity. You act first—get the patient off the couch—and let mood change follow.
In the 1990s, psychologist Neil Jacobson asked a dangerous question: What if the cognitive part of cognitive behavioral therapy—the “C” in CBT—was not doing the work? What if behavior change alone—just the “B”—was enough?
So he tested it. He stripped CBT down to its components and removed the cognitive piece, the part where you challenge negative thoughts. The result was awkward for the field: behavioral activation on its own worked just as well.
That finding replicated. In a subsequent trial, BA beat CBT and was as effective as antidepressant medication. BA was simpler, required less abstract self-monitoring, and could be delivered as a standalone treatment.
This was the phonics moment. The evidence was not subtle.
How the Field Reacted
If mental health care worked like engineering, the response would have been obvious: Dramatically expand this technique.
Instead, the field absorbed the threat rhetorically and neutralized it operationally.
Behavioral activation was folded into the CBT brand. Training programs began to say they included BA. Manuals mentioned activity scheduling, and everyone nodded.
This is what balanced literacy looked like in reading: everyone claimed to teach phonics, but the actual instruction drifted elsewhere. The same thing happened to BA.
Why Professionals Do Not Love BA
Behavioral activation has the same problem phonics had. It looks unsophisticated. It does not require deep insight or flatter professional identity. Often you’re nudging people to . . . go for a walk. It does not depend on years of interpretive training.

Studies have shown that BA can be delivered by less specialized staff with no loss in outcomes. That finding should be a public health triumph. Instead, it creates professional unease.
If BA works just as well as CBT, then at minimum you would expect therapists to explain each option to patients neutrally and ask about their preferences.
BA Loses Its Champion
Jacobson had the authority to challenge CBT from inside the tent. Sadly, he died in 1999, at age 50.
After Jacobson, BA had evidence on its side but no enforcer. It lingered, acknowledged but sidelined, like phonics circa 2015. There was no one left to say: this is the engine driving outcomes, not the garnish making them look good.
Evidence still came in. In 2016, the COBRA trial in the United Kingdom found that BA was just as effective as CBT in treating adult depression at twelve-month follow-up, and cheaper to deliver because it could be provided by less intensively trained staff.
A 2023 meta-analysis of 22 randomized controlled trials pointed the same way. But the evidence didn’t dent real-world practice.
Phonics suffered the same fate for decades. Researchers like Jeanne Chall were right early, but they did not move systems.
Emily Hanford was the phonics breakthrough, with her jaw-droppingly popular podcast. Robert Pondiscio and Riley Fletcher told that story well in Education Next in 2024.
Behavioral activation is still waiting for its equivalent moment.
What the Numbers Say
It’s hard to find a precise figure for how widely BA is used. How many therapy sessions are truly devoted to it, rather than touching on it for a minute or two? In England, though, the National Health Service published relevant 2015 data:
- Total attended treatment appointments: 3,576,565
- Behavioral activation treatment appointments: 62,447
So BA’s share of treatment appointments in England that year was roughly 1.7 percent.
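For readers who want to check the arithmetic, here is a minimal sketch in Python, using only the two NHS figures above:

```python
# NHS England, 2015 (figures from the list above)
total_appointments = 3_576_565
ba_appointments = 62_447

share = ba_appointments / total_appointments
print(f"BA share of treatment appointments: {share:.1%}")  # -> 1.7%
```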
Anecdotally, in our work with hundreds of U.S. teens over the past year, when we asked what type of therapy they had received, exactly zero reported treatment with BA.
What Lies Beneath
Across fields, systems prefer theories that flatter professionals over techniques that reliably help strugglers. Phonics versus whole language. Basic math fluency versus mathematical thinking. Behavioral activation versus cognitive therapy.
The methods that hold up under scrutiny are sometimes simple and unglamorous.
An Immodest Proposal
So, here is what we plan to do next.
We are a small startup lab called the Center for Teen Flourishing. We have tested many K–12 instructional interventions at scale over the years. However, in mental health, we are novices!
Rather than pretending otherwise, we will partner with experienced clinicians and methodologists for this work. But we will bring something the field has been oddly reluctant to test: curiosity about the revealed preferences of teens themselves.
Phase 1: Ask teens what they want. We will present adolescents with clear, neutral descriptions of major evidence-based therapies. Cognitive behavioral therapy. Acceptance and commitment therapy. Behavioral activation. No marketing language allowed, just what the work actually involves.
We’ll also show three one-minute video intros from certified therapists.
Then we will ask a simple question: Which therapy do you want, and from whom?
To the best of our knowledge, this is rarely done in a systematic way. A therapist is usually just assigned. And sure, the therapist will ask about goals, but we can’t find any intake process that lets the teen drive the decision.
Richard Reeves has observed that boys are much less likely than girls to seek mental health treatment. That tracks with our experience. We hypothesize that boys, in particular, may be drawn to behavioral activation, not because it is easier, but because it emphasizes action over introspection. Less examining feelings and thoughts. More “do something this week.”
Phase 2: Deliver therapy, run an RCT. How well do CBT and BA work with this population? (BA has been studied with adolescents, but only in very small trials.) And does matching teens to their preferred therapy and therapist improve engagement and outcomes?
We’re also curious about other things. Can you tinker with BA so the focus is on activities with “big chunks of hours”: asking not just what one-off thing you want to do (like “go fishing”) but what can become a weekly routine, productively eating up six hours a week?
Because if that happens, we can set our sights on another potential win: reduced screentime. There would seem to be a big benefit in a hobby that sticks (rock climbing, playing guitar), a part-time job, or a structured sport with year-round practices and a built-in community (volleyball). As best we can tell, BA research has not gone deep here.
There will be technical issues to solve. One is how to balance our two goals: testing whether preference-matching improves outcomes while also maintaining randomized comparison groups to assess which therapy works better.
One solution is to randomize within each preference group—teens who prefer BA would be randomly assigned to receive either BA or CBT, along with a usual-care comparison group where appropriate.
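As a rough illustration of that design, here is a minimal sketch in Python. The even split, the teen IDs, and the omission of the usual-care arm are our simplifying assumptions, not a finished protocol:

```python
import random

# Sketch: randomize within each preference stratum, so the trial can estimate
# both which therapy works better and whether preference-matching helps.
# (A usual-care comparison arm, mentioned above, is omitted for brevity.)
TREATMENTS = ["BA", "CBT"]

def assign(teens_by_preference, seed=0):
    """teens_by_preference maps a preferred therapy to a list of teen IDs."""
    rng = random.Random(seed)  # fixed seed keeps assignments reproducible and auditable
    assignments = {}
    for preference, teens in teens_by_preference.items():
        shuffled = teens[:]
        rng.shuffle(shuffled)
        for i, teen in enumerate(shuffled):
            treatment = TREATMENTS[i % 2]  # even split within the stratum
            assignments[teen] = {
                "preference": preference,
                "treatment": treatment,
                "matched": treatment == preference,  # got their preferred therapy?
            }
    return assignments

# Illustrative cohort: two preference strata with hypothetical IDs
cohort = {"BA": ["t01", "t02", "t03", "t04"], "CBT": ["t05", "t06", "t07", "t08"]}
for teen, arm in assign(cohort).items():
    print(teen, arm)
```

Because both therapies appear in both strata, the design lets you compare the same therapy across matched and unmatched teens, separating the effect of the treatment itself from the effect of getting one’s preference.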
Another technical challenge would be measuring whether BA or CBT reduces screentime: self-reports here are notoriously inaccurate, and screentime tends to be spread across multiple devices (phone, laptop, watch, TV).
Phase 3: Invite predictions about our experiment. We will do something else unusual: ask non-participating therapists and high school counselors to register their predictions. Which treatments do they believe will work best for which teens? Our hypothesis is simple: People who publicly commit to predictions are more likely to revise their beliefs when data arrive.
Think of it as a learning mechanism. After all, we estimate there are something like 250 million teen therapy sessions per year in the U.S. A few small, durable improvements in treatment could have a large total impact.
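If we score those registered predictions, one simple option (our assumption; the scoring rule is not settled) is the Brier score, the squared error between a probability forecast and what actually happened:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.
    Lower is better; an uninformative 50/50 forecast always scores 0.25."""
    return (forecast - outcome) ** 2

# Illustrative only: a counselor says there is a 70% chance that
# preference-matched teens will drop out less than unmatched teens.
print(brier_score(0.70, 1))  # prediction borne out -> 0.09
print(brier_score(0.70, 0))  # prediction wrong     -> 0.49
```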
Follow-Up Experiments
If early results are promising, we will run follow-up experiments. Do simplified BA protocols outperform more complex versions? Does BA delivered by less specialized staff retain effectiveness for teens? Does preference matching reduce dropout more than symptom severity predicts? None of this research has to happen in one shot.
We are writing this now because we want a reaction. We want critics to tell us we are wrong, naive, or wasting our time. And we want allies to commit to helping us run, staff, or fund the experiment, or measure the results. The answer will come from testing, transparently, with outcomes that are hard to ignore.
We are eager to report back on what we learn.
Sean Geraghty and Mike Goldstein are the co-founders of the Center for Teen Flourishing.

