The Fudge and the Fraud
There’s a pretty interesting article in the Atlantic today on research fraud in a particular corner of the business school universe…
Anonymous surveys of various fields find that roughly 2 percent of scholars will admit to having fabricated, falsified, or modified data at least once in their career. But business-school psychology may be especially prone to misbehavior. For one thing, the field’s research standards are weaker than those for other psychologists. In response to the replication crisis, campus psychology departments have lately taken up a raft of methodological reforms. Statistically suspect practices that were de rigueur a dozen years ago are now uncommon; sample sizes have gotten bigger; a study’s planned analyses are now commonly written down before the work is carried out. But this great awakening has been slower to develop in business-school psychology, several academics told me. “No one wants to kill the golden goose,” one early-career researcher in business academia said. If management and marketing professors embraced all of psychology’s reforms, he said, then many of their most memorable, most TED Talk–able findings would go away. “To use marketing lingo, we’d lose our unique value proposition.”
We have built a system in which success in the profession, and indeed survival in it, turns on an individual’s capacity to publish a handful of articles in peer-reviewed journals during the two-tiered period of apprenticeship: graduate school and the assistant professorship. The appropriate distribution of such publications across those two stages results first in a job offer and, some six years later, tenure. Service and teaching aren’t exactly irrelevant, but they’re a lot less important than establishing this record of publication. Under these circumstances it would be shocking if there were no fraud; the incentives to generate results are extraordinary, and the guardrails aren’t terribly difficult to evade. The business-school universe has outsized incentives (and potentially outsized fraud), but it isn’t different in kind from other academic disciplines.
Peer review is conducted by experts in the field who do not necessarily have specific expertise in the dataset or the minutiae of the methodology. Indeed, the methodological arms race, fueled largely by the availability of ever more powerful data-analysis software, often leaves senior peer reviewers with no reliable way of evaluating particular methodological decisions. It is not difficult to hide data malfeasance from an unpaid peer reviewer who has no particular incentive to dig deep enough to find the problem, especially when finding it could generate a cascade of trouble for the author, the author’s colleagues, and the author’s advisors.
It’s easy to imagine how cheating might lead to more cheating. If business-school psychology is beset with suspect research, then the bar for getting published in its flagship journals ratchets up: A study must be even flashier than all the other flashy findings if its authors want to stand out. Such incentives move in only one direction: Eventually, the standard tools for torturing your data will no longer be enough. Now you have to go a little further; now you have to cut your data up, and carve them into sham results. Having one or two prolific frauds around would push the bar for publishing still higher, inviting yet more corruption. (And because the work is not exactly brain surgery, no one dies as a result.) In this way, a single discipline might come to look like Major League Baseball did 20 years ago: defined by juiced-up stats.
I suspect that what we term “fraud” is almost always “fudge”: a methodological decision made here, a number excluded there, so that the regression fits and the p-value gold standard can be met. Even successful papers are read by relatively few people, and the fudge is never found, or, if found, never commented upon. I strongly suspect that many of the Fudgers worry a great deal that their work will be cited by too many people; the peer-reviewed article serves its purpose as a line on the CV, but if it becomes too popular, problems (intentional or accidental) have a higher chance of coming to light. It’s also worth noting that it’s not difficult to fudge, or outright fabricate, qualitative research. A document in a dusty archive that disconfirms the central thesis is disregarded; an interview with a subject that puts the entire project at risk is ignored; unsupervised fieldwork in a distant locale is fabricated or exaggerated beyond recognition.
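To make the quantitative version of the fudge concrete, here is a minimal sketch using entirely simulated, hypothetical data and a plain-Python permutation test (the function name `perm_p_value` and all numbers are my own illustration, not anything from a real study). It shows how selectively excluding a few “outliers” can push a null result toward the conventional significance threshold even when no true effect exists:

```python
# Hypothetical illustration of "the fudge": selectively excluding
# observations until a nonexistent effect looks significant.
# All data are simulated; nothing here comes from any real study.
import random

random.seed(1)

def perm_p_value(a, b, n_perm=2000):
    """Two-sided permutation test for a difference in group means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# Two groups drawn from the SAME distribution: there is no true effect.
a = [random.gauss(0, 1) for _ in range(30)]
b = [random.gauss(0, 1) for _ in range(30)]
print(f"honest analysis: p = {perm_p_value(a, b):.3f}")

# The fudge: drop the observation least favorable to the "effect" from
# each group, one pair at a time, nudging the p-value downward.
dropped = 0
while perm_p_value(a, b) >= 0.05 and dropped < 12:
    if sum(a) / len(a) >= sum(b) / len(b):
        a.remove(min(a))   # discard a's low "outlier"
        b.remove(max(b))   # discard b's high "outlier"
    else:
        a.remove(max(a))
        b.remove(min(b))
    dropped += 2
print(f"after excluding {dropped} 'outliers': p = {perm_p_value(a, b):.3f}")
```

The mechanism generalizes: any rule for excluding “outliers” that is chosen after looking at the results biases p-values downward, which is exactly why preregistered analysis plans make this kind of fudge harder.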
While I am always inclined to blame the administrative class for the problems of the modern university, this one is on the faculty. The system of hiring and promotion that we designed and that we have propagated is more or less guaranteed to produce fraudulent work. I find that I have remarkably little faith in contemporary academic research at this point, not just because of the fraud and the fudge but also because of the other incentives that this system has generated (salami slicing, for example). I’ve said it before and I’ll say it again: friends don’t let friends enter PhD programs in the humanities or the social sciences, and good mentors don’t guide good students into those programs, either.