Sunday, August 26, 2012

"Impact factors" and the damage they do.

The Wall Street Journal ran this article yesterday about how some journals try to manipulate their impact factors, for example by insisting that authors of submitted manuscripts add references that point to that specific journal.  The impact factor is (according to that article's version of the Thomson/ISI definition) the number of citations in a given year to the papers a journal published over the preceding two years, divided by the number of papers published in that window.  I've written before about why I think impact factors are misleading:  they're a metric driven by outliers rather than by typical performance.  Nature and Science (for example) have high impact factors not really because their typical paper is much more highly cited than, e.g., Phys Rev B's; rather, the probability that a paper in Nature or Science is one of the rare ones that gets thousands of citations is higher than in Phys Rev B.

The kind of behavior reported in the article is the analog of an author self-citing like crazy in an effort to boost citation counts or h-index.  Setting aside how easy such games are to detect and correct for, self-citation can't make a lousy, unproductive researcher look like a powerhouse, but it can make a marginal researcher look less marginal.  Likewise, spiking a journal's citations can't make a lousy journal suddenly have an IF of ten, but it can make that journal's IF look like 1 instead of 0.3, and some people actually care about this.
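
To make that definition concrete, here is a minimal sketch (in Python, with entirely made-up citation counts) of the two-year impact factor arithmetic, and of how a couple of outlier papers can drag the mean far away from what a typical paper in the same journal experiences:

    # Hypothetical citation counts received this year by every paper a journal
    # published over the preceding two years (numbers are purely illustrative).
    citations = [2500, 800] + [3] * 98   # two blockbusters plus 98 typical papers

    papers_published = len(citations)    # citable items from the two-year window
    total_citations = sum(citations)     # citations to those items this year

    impact_factor = total_citations / papers_published
    print(f"Impact factor: {impact_factor:.1f}")   # ~35.9, dominated by two papers

    # The median is a better proxy for what the "typical" paper experiences.
    median_citations = sorted(citations)[papers_published // 2]
    print(f"Median citations per paper: {median_citations}")   # 3

The point is simply that a mean taken over a heavy-tailed distribution says more about the tail than about the typical paper.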

Why should any publishing scientific researcher spend any time thinking about this?  Just look at the comments on that article and you'll see.  Behavior and practices that damage the integrity of the scientific publishing enterprise have the potential to do grave harm to the (already precarious) standing of science.  If the average person comes to think that scientific research is a rigged game, full of corrupt scientists looking only to further their own images and careers, and no more objective than people arguing over untestable opinions, that's tragic.



Comments:

  1. Anonymous, 5:50 PM

    Thank you for these clear words!
    But most of the damage to science results from the corrupt peer review system. What counts nowadays is catchy labels and hype!
    And such publications find their way into the newly popping-up offshoot journals.

  2. Anonymous:
    As someone who reads a LOT of reviews of manuscripts, I can personally testify that your characterization of the peer review system is wildly inaccurate. The number of reviews that I have encountered in which "catchy labels and hypes" are a significant factor is a tiny, tiny fraction of the number of reviews I've read. I don't know about your personal experience, but you should not make such characterizations without evidence (and anecdotal evidence, as we all know, doesn't count).

    I am FAR more concerned about the number of people who decline requests to perform reviews every time they're asked. Reviewing is part of our job - if you ALWAYS say 'no', you are shirking your responsibilities.

  3. Amused onlooker, 11:58 AM

    DanM:

    As someone who reads a LOT of reviews of manuscripts, I can personally testify that your characterization of the peer review system is wildly inaccurate ...

    ... and anecdotal evidence, as we all know, doesn't count

    Fascinating. Please, do go on.

  4. Keep in mind that impact factors apply to journals, not individual papers. As researchers increasingly discover papers through search engines, a paper's downloads and citations will count for more than a journal's impact factor.

    That trend may already have started. See my blog post about a bibliometric study that correlated citations with impact factor.

  5. ya caught that, did ya? And people say I don't have a sense of humor...

  6. Anonymous, 2:56 PM

    A certain journal I am intimately involved with has a much ballyhooed and rising impact factor. Why do the editors care? The widely known reason is bragging rights, but a lesser known reason is that it gets more resources from the publisher.

    Also, we checked to see whether our citations were dominated by only a few top papers, and the good news is they weren't. Among the roughly 1000 papers per year, the top few papers accounted for only 1% or so of the citations, and it fell off smoothly from there.

    Of course none of this is secret - it can be easily checked in ISI. If there is a journal you think is playing these games, don't publish there.

  7. Anonymous, 4:45 PM

    Sorry DanM,

    I am not talking about the thousands and thousands of papers that are submitted to science journals merely because of the "publish or perish" pressure a lot of scientists are exposed to. Most authors and reviewers do what they have to do and what they can do. Declining requests to perform reviews is simply a consequence of being overloaded.

    The damage to science results from the exaggerated scientific breakthroughs that sometimes find their way into high-ranked journals, and from there straight into the public eye. Maybe you remember the Schön scandal. It's the collapse of such "breakthroughs" that does severe harm to the standing of science, and such damage can never be completely redressed. Thus, science should avoid any tabloid-press attitude. It's the responsibility of editors and reviewers to keep an eye on that.
