
I recently had the opportunity to be part of an OpenAI faculty roundtable. I was one of about a dozen professors who were joined by several staff members from OpenAI’s recently created “Education Team.” We talked about our best practices for teaching with AI and our worries about its impact on student engagement, motivation, and academic integrity. The Education Team listened, asked questions, and presented their own vision of an “AI Native Institution.”
I hate to admit this, but I left the event feeling really depressed.
Our conversations were all about isolated and idiosyncratic (and, sure, exemplary) pedagogical practices, but completely lacking in big-picture vision—as if all we had to do was better integrate some whiz-bang gadget one student, one faculty member, one institution at a time. Yes, I liked how Jeffrey Bussgang created custom GPTs for his entrepreneurship class at the Harvard Business School. And, yes, I thought Stefano Puntoni’s work at Wharton on integrating AI into his students’ writing was interesting. (OpenAI used these examples as “proof of concept.”) But to be fair, most of us sitting around the table have made similar or even better adaptations, and I don’t think any of us feel like we are part of the solution. Rather, we’re all barely keeping our heads above water as we navigate what Ethan Mollick terms a “post-apocalyptic education.”
This is why I believe AI has precipitated a fundamental crisis of purpose in higher education, and I am far from alone in this perspective. So, I expected more from a $300 billion company on the cutting edge of disrupting the world.
This is what OpenAI should have done.
First and foremost, they should have named the correct problem. Everyone thinks the issue with AI is that just about every student is cheating their way through college. Yes and no. It’s true that most students have little intrinsic motivation to learn and find the easiest way through the checklist of courses in order to get their credential.
But the real story is that AI has broken the transmission model of education, where professors teach and then grade students on how much they learned. A passing grade used to mean students had learned enough of what the professor had “transmitted.” No longer. These past two years, faculty have given out A’s left and right to students who don’t understand (much less have read) the assignments they just submitted. I cannot overstate this: AI has decoupled students’ performance (what they submit to us) from their knowledge.
This is not all bad news; a massive crisis is also a massive opportunity. The second thing OpenAI should have done is tease out the implications of and solutions to this disruption they have wrought. This doesn’t mean reactive and on-the-margins interventions—a return to blue books, watermarking AI output, process tracking, honor code updates—that may temporarily mitigate the problem.
Rather, this is about the reinvention of the structures and systems of higher education. OpenAI could have talked about solving Bloom’s two-sigma problem through the power of personalized, on-demand tutoring. They could have talked about scaling such personalization to the more than 9 million U.S. college students taking at least one online course or the 200 million-plus people worldwide enrolled in a MOOC. They could have talked about harnessing this potential to support degree completion for over 36 million adults with “some college, but no credential.” These types of interventions show how AI has the ability to break the iron triangle of higher education, offering high-quality content at a low cost with maximum access.
Let me offer two small examples from my own classroom. I have fundamentally rethought how my students demonstrate their competence, grading them now on a combination of informal reflections and formal outcomes that have real-world relevance. Along the way, I teach them how to use AI as their daily tutor (not their ghostwriter). I have also rethought what assigned readings they should do (if any), as their conversations with AI are sometimes far more helpful than any chapter from a textbook. These two vignettes should put the entire textbook and testing industries on notice. If I can adapt this way, I would hope OpenAI could do even better.
AI is not just another shiny new gadget. It is a paradigm-shifting technology. The rise of the printing press in medieval Europe fundamentally altered how people related to knowledge, sparking a centuries-long expansion of literacy and thus the democratization of knowledge. I believe AI is the printing press of our time, again expanding and democratizing the process of learning.
All of us in higher education have long known that the transmission model was deeply flawed. But until ChatGPT was released in November of 2022, we had no viable alternatives. Now we do. What OpenAI should have done, if it wanted to help reshape the future of education, is stop asking how AI fits into the old paradigm by tinkering at the edges and start imagining and investing in what a new model of education could look like.
Dan Sarofian-Butin was the founding dean of the School of Education and Social Policy at Merrimack College and is now a professor of education there.