
Tilburg University professor Diederik Stapel was something of a star in the field of social psychology. His research on human behavior had achieved renown both in the Netherlands and abroad. One of his studies claimed that eating meat caused selfish and antisocial behavior. Another suggested that a “trash-filled” environment brought out racist tendencies in people. 

In September 2011, Tilburg University suspended Stapel on suspicion of using fictitious data. Several of his colleagues had complained that his numbers were simply too good to be true, and young research assistants noted that Stapel refused to show them the raw data behind his research. After an investigation, the Levelt Committee released a report concluding that Stapel had fabricated or manipulated data in at least 30 peer-reviewed papers.

[Chart: Cases of Fraud in Academic Publications over Time. Source: The Cambridge Journal. Data via Fang Labs.]

Stapel’s case is indicative of a growing trend in academia. According to Fang Labs, reported instances of fraud or suspected fraud tripled from the 2002–2006 period to 2007–2011.

The cause of this sharp uptick in academic fraud is unclear. It could simply be better online policing: thanks to online publication, scholarly work is now subject to a higher level of scrutiny, and new plagiarism-detection software and a vigilant online community have made falsifying data more difficult. All of this contributed to putting Diederik Stapel behind bars in the ivory tower.

The actions of researchers like Diederik Stapel represent more than a few ambitious individuals willing to deceive to further their careers. The Levelt Committee report states:

“[The Stapel case] involved a more general failure of scientific criticism in the peer community and a research culture that was excessively oriented to uncritical confirmation of one’s own ideas and to finding appealing but theoretically superficial ad hoc results.” 

It is difficult to fully substantiate this claim, but academics have widely discussed this failing in recent years. One often-cited culprit is academia’s “publish or perish” culture. Universities place immense pressure on faculty to publish more and more papers in peer-reviewed journals. Imperial College London’s medicine department, for example, had a policy requiring faculty to “publish three papers per annum including one in a prestigious journal with an impact factor of at least five.” Peter Higgs (of Higgs boson fame) told The Guardian that because he had published relatively few papers, Edinburgh University would probably have fired him had he not been nominated for the Nobel Prize in 1980.

Faculty positions and tenure depend on publishing productivity. But what does it take to get published in a reputable journal?

According to experimental psychologist Chris Chambers, high-impact journals (particularly in the field of psychology) look for results that are “exciting, eye-opening, even implausible.” Novelty pieces. As psychologist Joseph Simmons told the science journal Nature:

“When we review papers, we’re often making authors prove that their findings are novel or interesting. We’re not often making them prove that their findings are true.”

Results not only need to be novel and exciting; they must also be positive. Students learn in elementary school that the scientific method involves testing hypotheses, and hypotheses don’t always turn out to be right. A literature analysis by Nature, however, found a pervasive tendency for journals to publish only “positive” studies: papers that bore out the tested hypotheses represented 90% of all articles. Failure has a place in science, but not in science journals.

[Chart: Retractions of Publications from PubMed, 2000 to 2009. Source: The New York Times. Data via the Journal of Medical Ethics.]

The demands of institutions and journals have arguably created unhealthy incentives to value outcomes over sound research methods. As the number of reported cases of fraud has rapidly increased, so has the number of retractions — whether due to fraud, fabrication, an honest mistake, plagiarism, or irreproducible results. A study of the PubMed database found 180 retractions of publications in 2009, up from 3 in 2000. 

Subpar scientists are not the ones responsible for this shoddy science; it stems from the most rarefied institutions. A study by Dr. Ferric Fang and Dr. Arturo Casadevall compared the retraction rates of 17 journals to each journal’s “impact factor,” a measure of prestige based on how often its papers are cited in other work. They found that “the higher a journal’s impact factor, the higher its retraction rate.”

Again, it is possible that this increase is caused by a stronger online watchdog culture. But regardless of whether the fraud is new or newly discovered, the case of Diederik Stapel reveals the ugly underbelly of scientific research. The pressure to publish frequently in prestigious journals has made it more likely for researchers to cut corners and manipulate data. As Dr. Fang told the Times:

“You can’t afford to fail, to have your hypothesis disproven. It’s a small minority of scientists who engage in frank misconduct. It’s a much more insidious thing that you feel compelled to put the best face on everything.”

We’ve all learned about the scientific method. But there is also an institutionalized process by which scientists fund their research, run experiments or studies, and discuss, analyze, and share the results. With the current process incentivizing deceptive practices, we’d say it’s time to make some improvements.
