Science Careers’ sister site, ScienceInsider, reported yesterday afternoon that 150 scientists and 75 scientific groups have co-signed an open letter protesting what they claim is an overreliance on journal impact factors by funding agencies, academic institutions, journals, and organizations that provide publication metrics. Developed by Thomson Reuters, the journal impact factor measures a journal’s purported importance by gauging how frequently other journals cite the papers that it publishes. “The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions,” the letter says. But that is a misuse of the metric, it argues: the journal impact factor was initially developed to help librarians determine which journals to subscribe to, not to rate individual scientists competing for jobs or grants.
Hiring committees and funding agencies should also give more consideration to a scientist’s nonpublication contributions, the letter argues, such as data sets, reagents, software, patents, and the training of new scientists.
The signatories—including Science Editor-in-Chief Bruce Alberts, who wrote an editorial on the subject in this week’s issue of Science—make 18 recommendations, most of which discourage journal editors and hiring managers from considering only the prestige of the journals in which applicants have published while ignoring the content and impact of their actual research. The recommendations also encourage hiring managers and funding agencies to pay attention to an applicant’s nonpublication “research output”—language reminiscent of the National Science Foundation’s recent revisions to its Grant Proposal Guide, which direct applicants to list research “products” instead of publications.
Alberts’s editorial also notes that relying on journal impact factors to evaluate a scientist’s research output “creates a strong disincentive to pursue risky and potentially groundbreaking work, because it takes years to create a new approach in a new experimental context, during which no publications should be expected.”
Not all publishers agree with the letter, though. A news article points out that the editors-in-chief at two other prominent scientific publishers, Nature Publishing Group and Elsevier, declined to sign the letter, although they agreed that journal impact factors shouldn’t be used to evaluate individual scientists. Quoted in that article, Nature Editor-in-Chief Philip Campbell said, “the draft statement contained many specific elements, some of which were too sweeping for me or my colleagues to sign up to.”
More information on the letter can be found in the ScienceInsider article.