Is Psychological Science Self-Correcting? Citations Before and After Successful and Failed Replications

submitted by
Style Pass
2024-04-04 00:00:08

In principle, successful replications should enhance the credibility of scientific findings, and failed replications should reduce credibility. Yet it is unknown how replication typically affects the influence of research. We analyzed the citation history of 98 articles. Each was published by a selective psychology journal in 2008 and subjected to a replication attempt published in 2015. Relative to successful replications, failed replications reduced citations of replicated studies by only 5% to 9% on average, an amount that did not differ significantly from zero. Less than 3% of articles citing the original studies cited the replication attempt. It does not appear that replication failure much reduced the influence of nonreplicated findings in psychology. To increase the influence of replications, we recommend (a) requiring authors to cite replication studies alongside the individual findings and (b) enhancing reference databases and search engines to give higher priority to replication studies.

In recent years, systematic efforts have revealed that a large number of published findings cannot be replicated. Eighty-nine percent of “landmark” findings in preclinical cancer research (Begley & Ellis, 2012), 32% of highly cited clinical trials in medicine (Ioannidis, 2005), and 60% of experiments published in top psychology journals (Open Science Collaboration, 2015) have failed to produce similar results when repeated with new samples. Some failed replication studies find no effect at all; others find an effect that, though larger than zero, is much smaller than the effect reported in the original study (Ioannidis, 2008).
