Here's a quick question: for what distribution shape does the mean convey the least useful information? Well, there are many answers, but a prime candidate is a one-sided long-tail distribution of the type exhibited by journal citations. The mean, standard deviation and Pearson correlation are all summary statistics developed for the two-sided normal distribution (exercise for the reader: what are their non-parametric equivalents?). Applying them to anything else is like putting lipstick on a pig (ok, a poor analogy, but it sounds funny :-), but this porcine paintjob is exactly the method used to calculate a journal's Impact Factor.

So, what's the problem? In the context of a one-sided long-tail distribution, the mean is highly sensitive to outliers, and thus almost useless. Let's take an example. Suppose a journal published 99 papers and each was cited once, giving an Impact Factor of 1.0 (99*1/99). Now suppose a single additional paper was published which garnered 100 citations. The Impact Factor of the journal is now (99*1 + 1*100)/100 = 1.99, essentially 2.0. So a single paper, an outlier, has doubled the Impact Factor.
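The arithmetic above can be sketched in a couple of lines of Python (using the post's made-up numbers, not real journal data):

```python
# 99 papers, each cited once: the "Impact Factor" (mean citations) is 1.0.
citations = [1] * 99
before = sum(citations) / len(citations)

# Add one outlier paper with 100 citations.
citations.append(100)
after = sum(citations) / len(citations)

print(before)  # 1.0
print(after)   # 1.99 -- one paper has doubled the mean
```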

But that wouldn't happen in practice, right? No - you don't get it. The distribution of journal citations has a shape that guarantees this will happen; all those impact factors you read are just measures of outliers. How about instead "the Outlier Factor", or better still the "Extreme Value Factor"?

Still don't believe me? Well, let's take a concrete example. Thomson ISI has just deigned to give J. Cheminf. its first Impact Factor, with a value of 3.42. Let's say that 65 papers have been taken into account, so that's about 222 citations in total. Now let's enter an outlier into the mix, say the Open Babel paper published in October of last year. I would expect about 30 to 60 citations a year once it gets going (based on prior citations of the software, as well as experience with the GaussSum paper) - let's just say 50 for a round number, so 100 citations over the two-year period included in an Impact Factor. This means that, all else being equal, in one year's time the journal's Impact Factor will rise to 4.1, and in two years to 4.9.
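Here's that projection spelled out, with the same "all else being equal" assumption as above (65 existing papers plus the one outlier, and no other changes to the citation counts):

```python
papers = 65
impact_factor = 3.42
citations = round(papers * impact_factor)  # ~222 citations in total

# One outlier paper contributing ~50 citations per year.
one_year = (citations + 50) / (papers + 1)    # after year one
two_year = (citations + 100) / (papers + 1)   # after year two

print(round(one_year, 1))  # 4.1
print(round(two_year, 1))  # 4.9
```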

I just hope those Avogadro guys don't publish another outlier. :-)

## 8 comments:

Thank you for your post. I guess that the most striking example is the one of Acta Crystallographica A, which jumped from 1.5 to ~50 in 2009, all because of a single paper.

One could get a bit more rigorous on this by asking about the observed variability of impact factors from year to year...?

@gioby: What a great example! I'm tempted to rewrite the post now. Maybe I'll just add a note pointing to the comments below.

Actually, journal IFs and medians correlate: pic.twitter.com/QVMQkeZg

Some context here: http://occamstypewriter.org/scurry/2012/08/19/sick-of-impact-factors-coda/#comment-12080

Hmmmm...that's a bit strange, don't you think? In your example, the IF and the median not only correlate but have about the same value. Is my assumption about the shape of the distribution wrong then?

To clarify...did you calculate the mean yourself on the same date (apples-to-apples), or use the IF?

To clarify, the horizontal axis on the plot I linked to above corresponds to the 2012 Journal IF as given by Thomson Reuters. However, the medians on the vertical axis are of citations received in the last 5 years (note that the IF time frame is two years). I extracted the data from Thomson Reuters' Web of Science.

Clearly, a journal's IF is a good predictor of the 5-year median number of citations per paper in the journal. The median is of course a better metric than the average, as the former is much less affected by the skewed-distribution issue.
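[The robustness point in this comment is easy to demonstrate. Reusing the made-up 99-plus-one-outlier distribution from the post, Python's `statistics` module shows the median shrugging off the outlier that doubles the mean:]

```python
import statistics

# Skewed "citation" distribution: 99 papers cited once, one cited 100 times.
citations = [1] * 99 + [100]

print(statistics.mean(citations))    # 1.99 -- dragged up by the outlier
print(statistics.median(citations))  # 1.0  -- unaffected by it
```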

We just hit 3,000 citations on Google Scholar and over 2,000 by whatever they use on the site, so another outlier from Avogadro seems like a solid guess. Just saw this post on the altmetrics analysis.

It's an oldie but a goodie.
