A chart in The Scientist today (pictured, right) plots retractions against NIH grant success rates.
As DrugMonkey writes: ‘it appears truthy. Decreasing grant success is associated with increasing retraction rates. Makes a lot of sense. Desperate times drive the weak to desperate measures’.
Hmmm. Cut-throat competition for grants as a driver of sloppy, sensational or downright fraudulent work? It does make sense.
Problem is, it’s really hard to find evidence for this – and you definitely won’t find it in retractions. I think this is the wrong chart. Prompted by @chemjobber, I have gathered data for what I think is the correct chart (pictured, below), which paints a different picture.
Before I get to this chart, the thing to bear in mind about the recent rise in retraction notices is that it’s likely a sign of raised awareness of levels of misconduct/sloppiness that were always there, but hidden. That is, the retraction notices are merely uncovering a little more of the tip of the iceberg. Evidence? Well, Daniele Fanelli has already ably demonstrated that some 1-2% of scientists admit misconduct in anonymous self-reported surveys. And the Journal of Cell Biology has been checking figures in submitted papers for a decade, and the proportion of its papers revoked because of manipulation has stayed constant at around 1% over that time. Yet retraction notices run below 0.05% of the literature, even after their ten-fold increase in recent years. Go figure. As Fanelli put it at the recent World Conference on Research Integrity in Montreal: ‘Retractions are the symptom of a growing solution – not a growing problem’.
Now about this chart. It’s a minor irritant that the retraction notices plotted are worldwide notices, not just those from the US, yet they’re being compared to US funding success rates. What’s more problematic is that the author has plotted retraction notices, not retracted papers. A retraction notice often retracts work that was published many years earlier – when, say, NIH success rates were actually higher.
So I’ve replotted the chart using PubMed Reminer to get retracted papers per year, then restricted those to papers with US addresses, or to papers with US government PHS/NIH extramural funding.
Note of extreme caution: PubMed does not catch all retractions and I don’t know how good PubMed Reminer is. But, it’s as good as I was going to get on a quick trawl. Results are in this spreadsheet, and the picture is above.
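For anyone who wants to repeat the trawl without PubMed Reminer, PubMed’s own E-utilities API can count retracted papers per year directly. Below is a minimal sketch: the `[PT]`, `[PDAT]` and `[AD]` field tags are standard PubMed search syntax, but the US-address filter here is my own assumption and is cruder than whatever PubMed Reminer does – the same caveats about PubMed’s coverage apply.

```python
# Sketch: build PubMed E-utilities queries counting retracted papers per year.
# Field tags ([PT], [PDAT], [AD]) are standard PubMed syntax; the US-address
# filter is an assumption, not necessarily what PubMed Reminer uses.
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_term(year, us_only=False):
    """Search term for papers published in `year` that were later retracted."""
    term = f'"Retracted Publication"[PT] AND {year}[PDAT]'
    if us_only:
        # Crude filter on the affiliation field for US addresses.
        term += ' AND "united states"[AD]'
    return term

def esearch_count_url(term):
    """URL whose XML response carries a <Count> of matching records."""
    return ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": term, "rettype": "count"}
    )
```

Fetching `esearch_count_url(build_term(y))` for each year of interest, and again with `us_only=True`, gives the two yearly series plotted above.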
I have grayed out the circles for 2011 and 2012 because those numbers are not yet comparable: many more retracted papers from those years will surface in future retraction notices, in 2013, 2014, 2015 and beyond.
You will see that there is little correlation between NIH success rates and retracted US publications or retracted papers with PHS support/NIH extramural support. There is a correlation with the world’s total retracted papers. You’ll also notice that the US share of retracted papers each year drops dramatically in recent years (but we already knew that – see these figures).
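By ‘little correlation’ I mean eyeballing the chart, but the same check can be run numerically with a Pearson coefficient between the yearly series. The helper below is a generic sketch – it isn’t tied to any particular columns of the spreadsheet:

```python
# Sketch: Pearson correlation coefficient between two yearly series,
# e.g. NIH success rates vs counts of retracted US papers.
import math

def pearson(xs, ys):
    """Pearson r for two equal-length numeric series (needs n > 1)."""
    n = len(xs)
    assert n == len(ys) and n > 1
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Feeding it the success-rate column against each retraction series in the spreadsheet gives the r values behind the claims above; values near zero mean ‘little correlation’.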
Now, as I’ve noted above, we probably won’t find out until, say, 2015, whether squeezed NIH success rates in 2011 and 2012 led to a higher number of retracted NIH-funded or US (or even world) papers. So maybe we’ll find out that they did. I’m not holding my breath.
The real conclusion: comparing retraction notices against NIH success rates isn’t much evidence for anything.