AI Didn’t Destroy Critical Thinking. We Did.

Now educators must think critically about how to use these transformational tools to help students learn


It feels like we’ve reached a critical mass of consensus: AI is just bad for our students. The American Association of Colleges and Universities recently released the results of a national survey of U.S. faculty, which found that 95 percent believed AI would “increase students’ overreliance” on such tools, and 90 percent believed AI would “diminish students’ critical thinking skills.” This is mirrored in a recent Brookings report, which concluded that “the risks of utilizing AI in education overshadow its benefits.” As one professor (“I’m an AI power user”) put it, “I want to strip things back: no laptops, no phones, just pens and paper.”

It seems everyone wants to find a way to minimize or even forbid AI use, much as 33 states have passed cell phone bans or restrictions for K–12 schools. The consequences of doing nothing, such narratives proclaim, could be dire. The Brookings report, for example, throws around terms such as cognitive decline, cognitive impairment, and cognitive atrophy—all of which, it notes, are associated with an “unhealthy aging brain.” It quotes an MIT brain-imaging study suggesting that the long-term consequences of AI use may include “diminished critical inquiry, increased vulnerability to manipulation, decreased creativity…[and] risk internalizing shallow or biased perspectives.”

Here’s the problem with all this “Chicken Little” hysteria. Four years ago, before any of us had a clue about weird acronyms such as GPT, LLM, or AGI, every education expert I know was bemoaning students’ continued lack of academic competence. NAEP has for decades documented that only a small percentage of U.S. students reach even a “proficient” level in reading and writing and that, compared with other countries, U.S. students are consistently middle of the pack. Results from the Collegiate Learning Assessment led two prominent scholars to conclude that college students were “academically adrift” and learning almost nothing across their years in college.

AI, in other words, did not erode critical thinking; it exposed how poorly we have been teaching it.

Let me be blunt: There was no golden age of critical thinking or academic achievement before AI came along and seemingly ruined everything. In the years before ChatGPT arrived, K–12 educators said some of their most pressing concerns were that schools were boring and that we didn’t know how to talk to each other; college leaders worried that they could not strengthen the critical thinking, communication, and problem-solving skills students need to successfully enter the workforce.

So, sure, I understand today’s basic argument: Maybe using AI in the wrong way will make all this even worse. Trust me, I’ve been there. I was ready to give up and walk away as I saw AI supercharge a disengagement spiral that turned my college classroom into a transactional mirage of learning.


But here’s the thing: The emergence of AI truly marks a transformation akin to a Copernican revolution in education. AI has given us the chance to implement the kind of powerful personalized learning we have only dreamt of, a vision that education theorists have spent decades building concepts and theories around (e.g., ubiquitous learning, situated learning, legitimate peripheral participation, distributed cognition). The problem is that we have never been able to implement this vision faithfully within the institutional constraints of our education systems. And revolutionary moments, like all transformations, create massive disruptions.

The solution, though, is not to pretend these disruptions don’t exist, nor is it to bemoan that the sky is falling. Instead, we need to embrace them.

I, for example, have finally figured out how to help my students use AI as a daily tutor, Socratic conversation partner, and writing mentor. I walk my students through the ethical use of AI and how—if prompted correctly and used deliberatively—it can help them think carefully and thoughtfully about some of our most complex and contested societal issues. So rather than face a passive and disengaged lecture hall of 70 students, I watch them write daily reflections such as this: “Overall in this course I have noticed that we are being taught how to think rather than what to think and I think that AI has been a great tool during this process.” Many other researchers and faculty are experimenting with how to make AI a catalyst for learning rather than a ghostwriter for outsourcing thinking.

Recent hand-wringing about the loss of critical thinking skills, I would therefore suggest, says far more about how we teach than about how our students learn. If we really care about saving students’ critical thinking skills, we need to think critically ourselves about how to re-envision our education systems with the right guardrails and guideposts to leverage AI-driven tools rather than disengage from this transformational moment. The real dangers are prohibitions and nostalgia for a pre-AI world, which reflect our own failure to think critically. Educators’ embrace of AI as a transformational tool is what will make a world of difference.

Dan Sarofian-Butin was the founding dean of the School of Education and Social Policy at Merrimack College and is now a professor of education there.
