Using Impact Factor is the lazy way out

November 10th, 2010

I help moderate our graduate program’s weekly journal club, and some of the faculty involved are proposing strict guidelines on which journals can be used. Specifically, they’d like to restrict it to journals with an Impact Factor of ten or higher. I think that’s a horrible idea, so I responded thusly:

I oppose excluding papers based on impact factor, because it’s a seriously flawed metric. The Impact Factor is essentially a mean, so a single outlier can dominate it. As an example, the journal “Acta Crystallographica A” has a current impact factor of 49.93.[1] Of the 72 articles it published in 2008, 71 garnered no more than three citations each, while a single article racked up 5,624 citations and skewed the metric.
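To make that concrete, here’s a toy calculation. The citation counts are made up for illustration (the real IF also averages over a two-year window, which is why the journal’s actual figure is 49.93 rather than what this toy mean gives), but the gap between mean and median is the point:

```python
# Toy illustration: one blockbuster paper dominates a mean-based metric.
# These citation counts are invented; the real calculation also folds in
# the journal's 2007 papers.
citations = [5624] + [1] * 71  # one outlier, 71 papers cited once each

mean = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]

print(f"mean   (what IF reflects):  {mean:.1f}")  # ~79.1
print(f"median (the typical paper): {median}")    # 1
```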

On the other hand, a new breed of journals (like PLoS One) publishes a huge number of papers and relies on post-publication statistics to measure impact. All of those papers inflate the denominator of the metric and lead to a low IF, even though there are certainly some fine papers in that journal.
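For context, the two-year Impact Factor is just a ratio, and a mega-journal’s entire output lands in the denominator. A minimal sketch (the journal figures below are invented for illustration):

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year IF: citations received this year by items published in the
    previous two years, divided by the citable items from those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical mega-journal: thousands of papers, each modestly cited.
print(impact_factor(citations_this_year=20000, citable_items_prev_two_years=6000))
# -> ~3.3, even if many of those papers are individually excellent
```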

As further evidence, a 2009 paper performed a Principal Component Analysis of 39 different metrics of scholarly impact.[2] The authors concluded that the Impact Factor is positioned at the periphery of these measures, which should lead us to consider carefully how much we rely upon it.
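I can’t reproduce their dataset here, but mechanically the analysis looks something like the sketch below: build a (journals × metrics) matrix, run PCA, and see which metrics’ loadings fall far from the main cluster. The data here is random, so only the method is illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scores = rng.random((500, 39))  # 500 journals x 39 impact metrics (fake data)

pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(scores))

# Each metric's loadings on the first two components; a metric whose loading
# sits far from the main cluster is "peripheral" in the sense of [2].
loadings = pca.components_.T  # shape (39, 2)
print(loadings[:5])
```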

In short, I don’t think we as moderators should take the lazy way out and allow only so-called “top-tier” journals. Surely, out of all of us, at least one or two can spare 5 minutes each week to skim the article and judge it on its merits.

