Of AI and Our Desire to Believe the Kids Are Alright

We need thoughtful commentary on tech’s implications for learning. John McWhorter’s shrug of an Atlantic essay was anything but.

I usually find John McWhorter, Columbia linguistics professor and New York Times columnist, a serious thinker, insightful writer, and fount of good sense. But I was brought up short by his recent Atlantic essay “My Students Use AI. So What?”, which struck me as glibly unserious and the embodiment of a kind of seductive tech surrender that history has not treated favorably.

In the piece, McWhorter shrugs off concerns about ubiquitous phones, social media, and AI shortcuts: “Plenty of cultural critics argue that this is worrisome—that the trend of prizing images over the written word, short videos over books, will plunge us all into communal stupidity. I believe they are wrong.”

Even in his “book-crammed home,” he writes, his tween-age daughters’ eyes are “likely to be glued to a screen.” While he’s started limiting how much “digital junk” his daughters consume, McWhorter argues that dismissing “online clips as crude or stupefying” is to miss “the cleverness amid the slop.” Indeed, he proudly notes, “My girls are wittier than I was at their ages, largely because of all the comedic and stylized language they witness online.”


McWhorter also reflects on his students at Columbia, who are “relying on AI to read for them and write their essays, too.” He waves off the inevitable concerns: “Who can blame them for letting AI do much of the work that they are likely to let AI do anyway when they enter the real world?” (Ooh, ooh, I know! Could it be their real-world future employers, only 8 percent of whom deem Gen Z applicants ready for the workforce?)

Now, when I contemplate the hand-wringing AI doomerism that McWhorter must surely encounter daily at Columbia and the New York Times, I can appreciate his urge to push back on catastrophism. And he’s right both that culture evolves and that students have long been compelled to read some tedious dreck. There’s something to be said for upsetting this apple cart.

It’s also heartwarming to see how transparently McWhorter cares for his daughters and wants to approve of their interests. Again, I totally get it. Every parent wants to feel confident we’re raising our children to be happy, responsible, and capable. No parent wants to think we’re making (or condoning) bad choices.

But.

This is where parental impulses can work against us. For many decades, when asked to grade their oldest kid’s school, the lion’s share of parents have awarded an A or B—no matter how objectively awful the school may be. Why? Because parents badly want to believe our kids are being served well. Similarly, no matter how poorly students are faring on assessments of reading and math, most parents believe their kids are performing at grade level. Why? It’s partly the fault of grade inflation, but also parents just want to believe their kids are okay.

McWhorter’s essay is an exercise in cloudy optimism and dubious judgment. This is a bad time to resign ourselves to a vacuous, dopamine-driven TikTok culture, given that a decade of academic decline has finally sparked broad-based interest in content-rich curricula, rigorous instruction, and the importance of literacy.

The typically level-headed McWhorter sounds more like a spokesperson for the National Council of Teachers of English, which has urged schools to “de-center book reading and essay writing” in order to elevate stuff like memes and videos. He’d fit right in with education professors who emphasize “meaning-making” over literacy, or education consultants who dismiss content knowledge in the name of “21st century skills.” For those familiar with McWhorter’s work, this is like watching Bernie Sanders cutting an ad for Bank of America.

[Photo: John McWhorter]

McWhorter’s bland confidence in the potential of technology brought to mind the cavalier cheerleading of education reformers 10 or 20 years ago, who were convinced cell phones and social media would revolutionize schooling. As we scramble to reverse the effects of screen addiction, toxic social media, and smartphone ubiquity, it’s easy to forget how energetically we were once urged to embrace these technologies as the future of education.

Policymakers, philanthropists, and education reformers leapt at the promise of “blended learning” and “one-to-one devices” and saw smartphones as a way to move things along. In 2013, Brookings’s Darrell West explained that, since students had phones and “love mobile technology,” schools should “harness” those personal devices to “transform instruction.” In 2014, Lalitha Vasudevan, now managing director of the Teachers College Digital Futures Institute, insisted that cell phones “afford young people the chance to be seen and engaged as actors with a repertoire of literate practices and a sense of agency” and promised that phones could “serve as powerful resources in reconfiguring the educational landscape.”

In a 2012 Edutopia story on new classroom devices, many educators wondered, “How do you keep the students from playing games?” Simple assurances were given: “If students are given engaging, open-ended problems to solve, they won’t want or need to play games.” Easy-peasy.

It didn’t quite work out that way.

Tech enthusiasts also had a bad habit of slighting the importance of academic rigor. A 2006 TIME Magazine cover story, “How to Build a Student for the 21st Century,” captured the zeitgeist. It explained that schools could no longer afford to fixate on math, reading, or content knowledge. The reporters approvingly quoted the then-dean of Stanford’s School of Education mocking the idea that students should still learn South American geography, Civil War battles, or the periodic table of elements given that, as she put it, “You can look it up on Google.” There was a presumption that we’d entered a new world where the old rules no longer applied.

That familiar tech enthusiasm has now made its way to AI. The Gates Foundation’s K–12 education chief is thrilled that AI tutoring will allow students to pursue “their learning journeys” and shift us from “binary right-or-wrong thinking to curiosity and exploration.” The University of Utah’s Hollis Robbins, humanities scholar turned AI enthusiast, promises that AI will deliver “2.6x to 5x” improvements in learning. Secretary of Education Linda McMahon has asked, “How can we educate at the speed of light if we don’t have the best technology around?” and is therefore enthused that, “There is a school system that’s going to start making sure that first graders or even Pre-K have A1 [sic] teaching . . . that’s just a wonderful thing.” As he signed his executive order, “Advancing Artificial Intelligence Education for American Youth,” President Trump opined, “AI is where it seems to be at.”

Now, let’s be clear: I’m no tech pessimist. For one thing, you can’t work alongside my colleagues John Bailey (at AEI) and Michael Horn (here at EdNext) and be unmoved by their deeply informed, practical sense of AI’s possibilities. For another, I’ve written at length over the years about the educational promise of tech when used wisely and well (including in a book with Bror Saxberg, of which I’m quite fond). I stand by all of that.

Technology is a tool. Tools can be used poorly or well. Cars are wonderful things for adults. But, used recklessly by a kid, they’re also capable of great harm. The same is true of power drills and electric saws. That’s why we’ve developed norms reserving some tools for fully formed adults and recognizing that novices need supervision and training from more experienced users. We typically don’t just hand tweens an electric saw or the keys to the car and say, “You go have fun now.” McWhorter seems to have lost sight of that intuition here.

In a world awash with deepfakes, AI hallucinations, and malicious propaganda, there’s indisputable value in ensuring that youth master offline skills and content knowledge before they’re leaning on AI or burning through their free time watching cat videos. That requires parents and educators to set firm, age-appropriate limits on the use of devices and technology. But this is just the kind of heavy hand that McWhorter deems misguided. I disagree. I’m convinced that youth are better served when they spend more time reading books, less time watching TikToks, and have those expectations reinforced by parents and educators who act accordingly.

Students need to read and write essays unassisted by these new tools because that’s how they learn to think critically and communicate clearly. It’s fine for a student to use a calculator once she’s mastered computation. But insisting students can skip foundational skills because, when they’re older, “they are likely to let AI do [it] anyway” is to hobble them for life. Last week, my colleague Robert Pondiscio put it elegantly:

The very people most likely to misuse AI—those with shallow background knowledge, weak discernment or motivation—are the ones most susceptible to its illusions. It’s a knowledge amplifier, not a knowledge substitute. Education is not a product to be delivered; it’s a transformation that occurs through effort. The problem with AI is that it can perform education’s outputs—essays, analyses, answers—without any of its inputs. In sum, it is a powerful tool in the hands of the curious and the motivated but devastating to those merely seeking a shortcut.

To his credit, McWhorter concedes that teaching students to write an essay “is still necessary” and that “we just need to take a different tack.” He points to the value of blue-book essays, challenging prompts, and expectations for in-class participation as strategies that can help. He’s not wrong; it just feels like a half-hearted rearguard action once he’s asserted it’s fine for students to let “AI do much of the work,” since that’s how it’ll be “when they enter the real world.”

I’m reminded of last week’s attempt by Alex Kotran, CEO of the AI Education Project, to address these tensions. Kotran posted an AI-penned Substack column that explained “how two forward-thinking educators are using AI to make their classrooms *more* human.” In theory, this is spot-on: Use technology to customize instruction and personalize learning, making schools less impersonal and more “human.”

The problem wasn’t with the theory; it was with the real-world application. The first example as summarized by Kotran’s AI was a New York City teacher whose students were “disengaged.” The solution? Use AI to turn the unit into a “project-based learning curriculum.” Instead of having to pass a test, students could “demonstrate their knowledge by creating posters, mind maps, graphic novels, podcasts, or even videos and animations.” Kotran’s AI bot deemed this anecdote a “perfect example” of a “powerful answer” to concerns about AI.

Well, maybe. But I’m not sure ditching a test, allowing students to create a “mind map,” and then asserting that students are more engaged necessarily shows that AI is making classrooms “more human”—unless “more human” means lowering academic expectations and having students spend more time making podcasts and videos. (And just to recap: An AI commentary praised as “perfect” an AI-modified unit which allowed students to spend more time on devices.)

Look, if you can survey the state of academic performance, public discourse, college work habits, or youth well-being today and conclude that things are swell, then I don’t begrudge you celebrating the proliferation of viral videos and meme culture.

But if you’re dubious like me, McWhorter’s nonchalance feels less like good sense and more like willful denial. It’s nice to see McWhorter push back on the doomsayers, but I’d have hoped for more than glib reassurance from a typically thoughtful academic. We already get plenty of that from philanthropists, politicians, reformers, and tech impresarios.

Frederick Hess is an executive editor of Education Next and the author of the blog “Old School with Rick Hess.”
