The number and reach of phony scholarly journals, which will publish almost anything in exchange for a payment, without regard for quality or the bother of subjecting the work to real peer review, have been growing, as our colleague John Bohannon reported recently. In 2014, he notes, predatory publishers took in $75 million in fees from authors, some of whom paid unwittingly and others deviously. To help researchers avoid the clutches of these bogus publishers, a website called Think. Check. Submit. seeks to educate the unwary about how to decide which journals to submit their work to. Sponsored by an international group of publishers and libraries, the site offers a checklist for evaluating a journal’s legitimacy.
“It is up to you to decide which journal is right for your work – the check list is designed to help you identify which journals in your discipline are trustworthy, and will help your research have the maximum impact on your career,” the site states. “The aim of Think. Check. Submit. is not to refer researchers to a particular set of journals, which will fall out of date quickly. Instead, we are focused on helping researchers make the most informed decisions possible about where to publish their work.”
As papers in ersatz journals multiply, so do papers published in perfectly legitimate and respectable journals, creating complications of their own for the research community. The volume of real scientific publications has become so great that it now “has important implications for the integrity of modern biomedical science,” write Sabina Siebert, Laura M. Machesky, and Robert H. Insall of the University of Glasgow in the United Kingdom, in an article in the online journal eLife. Interviews the authors conducted with “senior biomedical researchers … revealed a perceived decline in trust in the scientific enterprise, in large part because the quantity of new data exceeds the field’s ability to process it appropriately.”
The surplus of information, which the authors term “overflow,” has raised “a serious issue in the way our interviewees assess the quality of science,” they explain. “[C]oncerns about quality of scientific outputs” are growing, in part because “scientists often use reputation—of their colleagues or of a journal, for example—as a proxy for trustworthiness.” As both the mass of material to evaluate and the number of labs and researchers grow, the risk increases that scientists will overlook or undervalue important work by researchers without well-established reputations, and that important insights will be lost as a result.
“The day-to-day experience of scientists is filled with devices aimed at managing overflow,” the authors continue. “Most obvious are the impact factors of journals, which are widely agreed to be unrepresentative[,] yet they are widely used as proxies for scientific importance in decisions about hiring and funding. Even more pernicious measures are becoming widespread, such as moves to assess the performance of academic staff by measuring the amount of grant money they bring in.”
Finding better solutions to the problem of overflow is an important task facing the research community. The eLife article suggests some possibilities, though it is more effective at describing the challenge than at solving it. It nonetheless raises an issue that deserves widespread attention in the scientific community.
In a sense, both the challenge of overflow and the existence of predatory journals have at least one cause in common: the need for academic scientists to maintain a high publication rate in order to build reputations, win funding, and secure jobs or promotions.