An entire sub-field of psychology - social priming - has largely failed to replicate. How does science respond to this kind of shock? Does it ignore the bad news and pursue business as usual? Does it attempt to rebuild foundational work at higher rigor? Or does it abandon the field? In short: how well does science self-correct?
This story from Nature gives anecdotal evidence that research using social priming has declined rapidly, following widespread failure to replicate. Unfortunately, there does not appear to be any peer-reviewed study about the fallout from the replication crisis (at least that I can find).
However, a number of papers have instead looked at how science responds to a different information shock: retraction. These papers all take the same basic approach: match papers “tainted” by retraction to otherwise-similar control papers, then compare the citations the two groups receive.
For example, Furman, Jensen, and Murray (2012) compare retracted biomedical papers to (un-retracted) controls published immediately before and after them in the same journal. The retraction penalty relative to neighbor articles is swift and severe.
Within 1 year, retracted articles are receiving 45% of the citations of the controls, a number that steadily falls to 20%.
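To make the research design concrete, here is a minimal sketch of a journal-neighbor comparison in Python. Everything in it is hypothetical: the paper IDs and citation counts are made-up numbers chosen to mimic the pattern above, not FJM’s actual data or code.

```python
# Illustrative sketch of an FJM-style comparison (not their actual code).
# Assumes a hypothetical table of yearly citation counts, keyed by paper ID,
# plus a mapping from each retracted paper to its journal "neighbors"
# (the un-retracted papers published immediately before and after it).

from statistics import mean

# citations[paper_id][t] = citations received t years after the retraction event
citations = {
    "retracted_1": [30, 14, 9, 5, 4],
    "control_1a":  [28, 27, 25, 24, 22],
    "control_1b":  [32, 30, 29, 27, 26],
}
neighbors = {"retracted_1": ["control_1a", "control_1b"]}

def citation_ratio(retracted_id, year):
    """Citations of the retracted paper as a share of its neighbor controls."""
    control_avg = mean(citations[c][year] for c in neighbors[retracted_id])
    return citations[retracted_id][year] / control_avg

for t in range(5):
    print(f"year {t}: retracted paper receives "
          f"{citation_ratio('retracted_1', t):.0%} of control citations")
```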
Note, though, that retracted articles still receive some citations. To see what’s going on, FJM delve into the post-retraction citations received by 20 retracted papers. The majority of these citing papers either acknowledge the retraction or do not rely on the retracted paper’s findings (citing it, for example, merely to argue the topic is interesting). So it looks like scientists do shun retracted work.
Lu, Jin, Uzzi, and Jones (2013) corroborate these results and go further. They match retracted papers in the Web of Science to papers in the same field with similar pre-retraction citation trajectories, and they too document a sharp citation penalty for retracted articles relative to controls.
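A rough sketch of what trajectory matching involves, with invented numbers (this is my illustration, not LJUJ’s actual procedure): pair each retracted paper with the same-field candidate whose pre-retraction citation history is closest.

```python
# Hedged sketch of citation-trajectory matching. Each paper is represented
# by its yearly pre-retraction citation counts; the control is the same-field
# paper whose trajectory is closest in Euclidean distance. All paper names
# and numbers below are hypothetical.

import math

def distance(traj_a, traj_b):
    """Euclidean distance between two pre-retraction citation trajectories."""
    return math.dist(traj_a, traj_b)

retracted_traj = [5, 12, 20, 26]  # citations in the 4 years before retraction

candidate_controls = {
    "paper_A": [4, 13, 19, 27],
    "paper_B": [1, 2, 2, 3],
    "paper_C": [10, 30, 55, 80],
}

best_match = min(candidate_controls,
                 key=lambda p: distance(retracted_traj, candidate_controls[p]))
print(best_match)  # paper_A: the closest pre-retraction trajectory
```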
But LJUJ also look at the impact of retraction on an author’s other work: papers published prior to the retraction event and not themselves retracted. They use the same approach to identify control articles with similar citation trajectories prior to the retraction event. There’s no detectable effect on an author’s other work if the author was the one who reported the problem that led to retraction (~22% of retractions). It seems scientists give people the benefit of the doubt in that case.
But if someone else discovered and reported the problem, there’s a 10% citation penalty for the author’s un-retracted work after a few years. Retraction breeds suspicion.
The same team returned to this question in 2019, this time focusing on how “blame” is allocated among authors when a retracted paper has multiple co-authors. Jin, Jones, Lu, and Uzzi (2019) look at the citation penalty suffered by different members of the team tainted by retraction. Again, they’re looking at the impact of retraction on authors’ un-retracted work.
It looks like the authors with less reputation get the blame for retraction events. JJLU measure the “eminence” of authors variously by the number of publications, citations, and h-index score (all computed for the year prior to retraction). Authors in the top 10% of the eminence measure don’t really see any citation impact on their other work, while those in the bottom 90% experience significantly fewer citations to their other work.
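The h-index is the most involved of these eminence measures, so here is a short sketch of how it’s computed; the citation counts below are hypothetical.

```python
# Sketch of one eminence measure: the h-index, computed from a (hypothetical)
# list of citation counts for an author's papers in the year before retraction.

def h_index(citation_counts):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([50, 18, 6, 5, 4, 1]))  # 4: four papers with >= 4 citations each
```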
What about work merely in the same field as retracted articles?
Azoulay, Furman, Krieger, and Murray (2015) identify PubMed articles similar to retracted ones by the overlap of their MeSH keywords, restricting to papers that share no common coauthors with the retracted work. These are papers on topics similar to those of retracted papers, but for which there is no other (observable) reason to be suspicious. They compare the citations of these articles to controls published immediately before and after in the same journal.
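For illustration, keyword-overlap matching of this sort can be sketched as a Jaccard-style similarity over MeSH term sets. The terms, threshold, and author names below are all hypothetical, not AFKM’s actual procedure.

```python
# Illustrative sketch of keyword-overlap matching. Papers count as "related"
# to a retracted paper when their MeSH keyword sets overlap strongly and the
# two papers share no coauthors. All specifics here are made up.

def jaccard(keywords_a, keywords_b):
    """Share of MeSH keywords the two papers have in common."""
    a, b = set(keywords_a), set(keywords_b)
    return len(a & b) / len(a | b)

retracted = {"mesh": {"Apoptosis", "Mice", "Neoplasms"}, "authors": {"Smith"}}
candidate = {"mesh": {"Apoptosis", "Neoplasms", "Humans"}, "authors": {"Jones"}}

related = (jaccard(retracted["mesh"], candidate["mesh"]) >= 0.5
           and not (retracted["authors"] & candidate["authors"]))
print(related)  # True: high keyword overlap, no shared coauthors
```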
They find that papers similar to the retracted article also suffer significant citation penalties! The penalty is strongest when the retraction calls into question the validity of the retracted article’s findings (not, for example, when the article was basically right but was published without permission from the data vendor).
So scientists are pretty responsive to news that research is flawed. They rapidly stop citing retracted work, and they look more skeptically at work by the same people, especially when they have less reason to trust the author (because the author didn’t self-report the problem or has a less prestigious track record). They even exercise more caution in citing work on similar topics.
So. What should we expect the response to the replication crisis to be? The strong and consistent evidence from retractions suggests research will move away from fields that fail to replicate. That said, there are some important differences between retractions and failed replications.
Retractions tend to occur rapidly or not at all (FJM find most retractions happen within 2 years of publication), whereas failure to replicate has come much later, possibly after an active research program has emerged.
Whereas retractions are rare events (1.4 in 10,000 papers for biology and medicine, fewer still in other fields, according to LJUJ), the rate of failure to replicate in many fields is disturbingly high.
Retractions are also relatively unambiguous, even though they can come in many flavors. In contrast, debate is still ongoing about what exactly constitutes a failure to replicate, and whether that’s even a meaningful term.
All in all, I am expecting to see a response to the replication crisis - though maybe not as swift and strong as the response to retraction.
Post-script
Thanks for reading! If you like this, you can help to improve this newsletter by sending me interesting papers on the economics of innovation, especially stuff you think isn’t well known.