The word “retraction” sends a shiver down many a scientist’s spine. But what are the real career repercussions for researchers whose papers are retracted? For biomedical scientists, the average impact is about a 10% penalty on future citations to prior papers, according to a National Bureau of Economic Research working paper posted in May. However, for eminent authors who retract due to misconduct, there is a steeper punishment: a future citation penalty of nearly 20%.
“It is valuable to have an actual empirical study that examines this,” says John Budd, a scholarly communications professor at the University of Missouri School of Information Science and Learning Technologies in Columbia, who was not involved in the study. “It’s not easy to assess changes in reputation, but the authors managed to get at least some kind of hint of what the effect of having retractions would be. … It’s a very ambitious paper.”
Author Pierre Azoulay, an economist at the Massachusetts Institute of Technology in Cambridge, believes this type of research is an important part of maintaining—or improving—the integrity of the scientific community. “Science is an institution, and it can work well or not well in terms of deterring crime and malfeasance,” Azoulay says. “The retraction system is part and parcel of the incentive system for the republic of science, and we should understand how it works because that will give us clues for how to make it work even better—or how to make it work at all if it doesn’t.”
Azoulay and his co-authors identified retractions by searching PubMed, a database that focuses on biomedical research and the life sciences. These retractions led them to focus on 376 U.S.-based authors who as of 2009 had at least one retraction of a paper published between 1977 and 2007. The researchers then looked at how the retraction affected the number of citations to these authors’ past work. A retraction wouldn’t change the quality of these already published works, so if the citation pattern subsequently changed, it would suggest that the retraction affected the researcher’s reputation.
Altogether, the studied authors published some 23,620 papers prior to their first retraction, and Azoulay and his team tallied the citations each author's papers garnered before and after the retraction event. They compared these counts with those of 759 control authors who had published in the same journal and issue as a retracted author and found that, on average, authors with a retraction went on to garner approximately 10% fewer citations than the controls did.
The team also wanted to develop a more nuanced picture, so it broke the data into a few different groups, based on the type of retraction and the author’s status. The researchers determined, as best they could, whether each retraction was due to misconduct or “honest mistakes” by reviewing the retraction notice and associated documents. They also categorized each author as high or low status based on the number of citations they had accumulated prior to the retraction and their funding from the National Institutes of Health.
For all authors, the citation penalty was greater when the retraction was due to misconduct as opposed to an honest mistake—a result that makes sense to David Resnik, a bioethicist at the National Institute of Environmental Health Sciences in Research Triangle Park, North Carolina. Misconduct is “a betrayal of trust” and “a betrayal of the most important value in science, and that’s the truth,” Resnik says. Honest errors, on the other hand, “are more excusable because that’s just sometimes what happens. People aren’t perfect, they make mistakes, and they can be forgiven for mistakes, but doing something intentionally wrong is much harder to forgive.”
Azoulay also suggests that the harsher penalty for misconduct may help keep the institution of science running smoothly. “There’s an argument, which I don’t fully buy but that could be made, that whether there was intention to deceive is irrelevant, and the only thing that matters in meting out punishment is whether one should build on the results or ignore them, … but if you want to inculcate the norms of proper behavior and to deter and maybe filter out of science the people who are more likely to engage in misconduct, then that differential penalty is warranted.”
According to Azoulay’s results, the effect is strongest for the high-status authors, with the penalty reaching nearly 20% for those with a retraction due to misconduct as opposed to about 10% for honest error. “The cynical view that bigwigs always manage to emerge unscathed is just not true,” Azoulay says.
As Benjamin Jones, a professor at Northwestern University’s Kellogg School of Management in Evanston, Illinois, puts it, “the greater you are, the farther you can fall.” Jones’s work has shown that, in single-retraction cases—thereby excluding the more extreme instances of misconduct that can lead to multiple retractions—the more eminent members of an author team experience a smaller citation penalty than their less eminent co-authors do. Combining those findings with the results of the current work, Jones says, “creates a nuanced picture of how eminence is protective when there is uncertainty, but when there’s a real scandal, being eminent doesn’t help.”
The obvious question is whether these results will actually act as a deterrent for other scientists. “One way to think about it is you can survive a scandal,” says John Walsh, a public policy professor at the Georgia Institute of Technology. “A 20% drop of my old papers is a penalty, but it’s not a death sentence, it’s not ostracism.”
And citations are just a small part of the career trajectory picture. If a retraction causes a researcher to lose his or her job, funding, or ability to attract students, postdocs, and collaborators, citations to papers published before the retraction are largely irrelevant. “The core punishments have to do with career prospects of these people after a scandal,” Walsh says, and by focusing just on citations, the current study doesn’t fully address that issue. “It would be interesting to see what it does to your future papers, not just what it did to your past—but that’s a second paper.”
Walsh and Barabási agree that the results could encourage researchers whose findings are invalidated by honest mistakes to act promptly to moderate the possible negative repercussions of a scandal. “Like many political scandals, it could be that the cover-up is worse than the crime,” Walsh says. If someone identifies a problem in your work and you go back and realize that there was contamination, you made an error, or some other issue made the results unreliable, then, he advises, just be honest about it. “You’re going to get a punishment, but you can think of it as sort of a fine”—which won’t end your career.