What if a huge chunk of scholarly research is a pointless exercise pursued by hobbyists who like the perks? That’s decidedly not the argument made by Harvard Business School’s Max Bazerman in Inside an Academic Scandal (The MIT Press), but it was the thought that lingered for me after I set down his pithy, engaging new book.
Bazerman has penned an insider account of the infamous “signing first” scandal, in which esteemed researchers who’d faked the data behind an influential 2012 paper were finally outed a decade later. The paper, which Bazerman co-authored, had concluded that asking people to promise to tell the truth before they filled out a form—rather than after—led to a huge increase in honesty. Except, as it turns out, it didn’t.
Several years later, non-fraudulent research found that the "signing first" phenomenon had no effect at all. Now, if you're thinking, "Seems weird that anyone believed it would have a big effect," I'm with you. But governments, companies, and the like had already jumped on this absurdly quick-and-easy solution (while hiring the authors as expert consultants).
The two culprits, Dan Ariely (Duke) and Francesca Gino (Harvard), managed to build hugely successful careers on the backs of massive, recurring fraud—and right under the noses of their co-authors and colleagues. A big part of the book is about Bazerman’s angst that they so easily did so. Ariely and Gino were eventually outed by three dogged scholars who’d founded the academic watchdog blog Data Colada. (For their trouble, the three were promptly sued by Gino.) Bazerman’s account is dotted with revealing details about the affair, including that Ariely and Gino wrote a book chapter—“Dishonesty explained: What leads moral people to act immorally”—that was heavily plagiarized from various sources, including doctoral theses.
Bazerman seems like a thoughtful, sincere guy who's taken the "signing first" scandal to heart. I came away impressed by his willingness to dig in and acknowledge the problems. He's a lifelong academic who's clearly committed to protecting social science from screwy incentives and bad actors. He notes that the Gino-Ariely scandal, like others he highlights, was a shockingly clear-cut case of fraud, made possible only by inattention and misplaced trust. He frets that plenty of other fraudsters may be going undetected, especially if their machinations are more sophisticated than the blatant fabrications for which Ariely and Gino were busted.
For Bazerman, the big takeaway is that social science must more aggressively embrace the kinds of sensible reforms promoted by the “open science” movement. He heaps disdain on “p-hacking,” calls for more responsible institutional stewardship at universities and journals, and celebrates measures like preregistering hypotheses and creating platforms on which researchers can transparently document their data collection.
This is all admirable enough. And through it all, Bazerman retains a touching faith in social science. As he wistfully asserts, “The first two decades of the new millennium were an inspiring time for social science. The media became fascinated with our latest findings, and they shared them in ways that intrigued the public. Governments changed policies based on what we discovered.”
Bazerman has no time for those who contend that social science’s track record on “misinformation,” diversity and inclusion, or the Covid response demonstrates the field is a dubious guide to social betterment. He dismisses the “growing science-denial community” and has no doubts about the academic enterprise he’s traversed with such success.
I, however, do have doubts. After reading his account, my takeaway is very different from Bazerman’s. I couldn’t help but think his faith is misplaced. To start with, many of the studies he references in the course of the book strike me as unnecessary or simply pointless. A (hugely incomplete) list of the published studies includes those that examine whether counterfeit products make people feel insecure; whether increasing one’s “perceived” height, such as by riding an escalator, leads to more altruistic behavior; whether networking leads people to think of words related to cleanliness; whether messy workplaces are more productive; whether commercials with skinny models are less effective than those with other models; and whether people thinking about death eat more candy. These aren’t studies Bazerman’s spotlighting but rather a sampling of the scholarly research he touches upon in the course of his narrative. It’s telling that he seems to see such studies as unexceptional.
To my jaded eye, such research seems less like “science” and more like “academics amusing themselves in polite company.” Indeed, Bazerman relates an almost too-perfect illustration of this dynamic. A doctoral student whose thesis included an extended critique of Gino’s networking/cleanliness study (which was also later found to be fraudulent) was advised by a member of her dissertation committee to delete the section. Why? Because “academic research is like a conversation at a cocktail party,” and her critique would be seen as rude and inappropriate. However inane we might find the research question, remember that Gino’s study was considered “real” social science, published by an esteemed scholar in a prestigious academic journal. And I haven’t even touched on the faddish, data-free, critical-theory argle-bargle that constitutes such a big chunk of academic publishing.
I’m left wondering how many research studies are just a playground for a privileged caste of credentialed scribblers to amuse themselves and build comfortable careers, all with the aid of hefty public subsidies. Scholars certainly don’t think so. They tell us research is a dynamic endeavor and we have to trust that these explorations are how we surface unexpected, important truths. But should we actually buy that? I’m inclined to think that William Proxmire had a point with his “Golden Fleece” awards, and that we’re way overdue for a serious conversation about the kinds of research that merit public support.
Bazerman laments that even the universities don't seem to take research outcomes all that seriously. It's hard to do so when they prioritize PR and legal considerations over transparency. For instance, when (ethics scholar!) Ariely's fraud came to light, Duke University's only response was to quietly have him complete an eight-week professional ethics course. (Of course, Duke itself had recently been fined $112 million for using falsified data to win $200 million in federal funding.)
I know I sound like a broken record, but it’s hard to ignore the opportunity cost of all this. Gino, for instance, published more than 130 papers between 2007 and 2022—of which dozens appeared to be plagued by falsification and misconduct. Meanwhile, Bazerman recounts, “Gino made little time to meet with doctoral students, often failed to show up for meetings, canceled meetings at the last minute, and sometimes called [her colleague] Julia at the last minute to ask Julia to cover her teaching obligations.”
What exactly was this Harvard professor (and fount of falsified research) doing instead of teaching or mentoring? Bazerman explains that the “division of labor” meant that junior members of her team “directed the work and mentored students, while Gino offered occasional input, paid the bills, and used her resources and connections to promote the work.”
Not only does all this raise major questions about the utility of social science research, it also casts serious doubt on its reliability. Bazerman describes another of this century’s more infamous academic scandals, which unfolded a decade ago in the Netherlands when hotshot Tilburg University social psychologist Diederik Stapel churned out scores of papers with doctored or fabricated data. Stapel had a hypothesis: that looking at pictures of an attractive person would affect self-image negatively. (Why this needed to be researched at all, much less by a publicly subsidized scholar rather than a bored marketing intern at Estée Lauder, isn’t clear to me.) In any event, Stapel was sure he was right, “but the actual data didn’t support it.” Consequently, Bazerman relates, “Stapel sat at his kitchen table and began typing numbers into his computer that would produce the intended effect.” His study was published in the prominent Journal of Personality and Social Psychology in 2004. Before being discovered, Stapel committed fraud in at least 55 papers, and his fictional data was used in ten PhD dissertations.
For Bazerman, Stapel’s folly is a terrible abuse of science. I agree. But, even if Stapel’s numbers had supported his hypothesis, I wouldn’t be all that impressed. I wouldn’t have come away convinced that Stapel surfaced some important, fundamental truth about human nature. More likely, I’d have thought it was a silly question and wondered about the soundness of his research design.
Now, I don’t mean this as some kind of anti-research screed. There are, of course, purposeful, comprehensive, data-conscious research enterprises that are attempting to answer questions of pressing social import. (This is the kind of scholarship that we celebrate at EdNext.) But, in Bazerman’s description of Stapel, I couldn’t help but think of all the thousands and thousands of social scientists who spend hours each day hunched over laptops playing with data files that they didn’t collect, don’t fully understand, and frequently take on faith. They don’t know exactly how the data was obtained, the vagaries of the collection, or how sturdy it is. How confident can we be in the results that get spit out, even when they’re “statistically significant”? I’d argue: A lot less than we typically are.
And it’s not like the researchers invested in these projects are scrupulously asking, “Is this true?” Rather, as Bazerman notes, the incentives to pump out papers or make a splash can lead to all manner of shortcuts. He points out that even esteemed scholars rarely review their co-authors’ data, because division of labor is a recipe for speed. They delegate much of the data collection to doctoral students because that helps move things along. This blind faith in data files is baked into the academic formula for grants, jobs, influence, and professional success (whether or not the results can be trusted).
The problems with “qualitative” research are familiar. Scholars can treat the methodology as a license to substitute ideology for inquiry or to shade their “findings” and narratives in the service of professional or political aims. But what’s hard to miss in Bazerman’s account is how the reassuring stolidity of systematic data collection, spreadsheets, and algebra can lend an undeserved patina of science and certitude to work deemed “quantitative.”
Earlier this year, on the Substack “Secrets of Grimoire Manor,” psychologist Christopher Ferguson argued, “Most social science research is a waste of time. . . . People get into this field to find cool and important results about human behavior. To admit that most of what we do accomplishes no such thing is understandably embarrassing.” Well, yeah.
I’m left thinking that much of today’s institutionally supported, professionally sanctioned social science could perhaps be more appropriately understood as a reasonable but inconsequential hobby, like collecting Pokémon cards or sports memorabilia. I suspect the need for this flavor of scholarship has been oversold, and the buzz it generates doesn’t yield anything useful or even true. I’m not suggesting there’s anything wrong with studying frivolous questions. I am suggesting that doing so should be a private activity for hobbyists, dilettantes, or employees of private enterprises—not a time-consuming task for educators at publicly subsidized institutions.
I’ll give Daniel Kahneman, the late Nobel-winning psychologist, the last word. In 2023, Kahneman wryly allowed that the past decade had left him jaded: “When I see a surprising finding, my default is not to believe it.”
Frederick Hess is an executive editor of Education Next and the author of the blog “Old School with Rick Hess.”