How best to quantify replication success? A simulation study on the comparison of replication success metrics

Jasmine Muradchanian*, Rink Hoekstra, Henk Kiers, Don van Ravenzwaaij

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

13 Citations (Scopus)
168 Downloads (Pure)

Abstract

To overcome the frequently debated crisis of confidence, replicating studies is becoming increasingly common. Multiple frequentist and Bayesian measures have been proposed to evaluate whether a replication is successful, but little is known about which method best captures replication success. This study is one of the first attempts to compare a number of quantitative measures of replication success with respect to their ability to draw the correct inference when the underlying truth is known, while taking publication bias into account. Our results show that Bayesian metrics slightly outperform frequentist metrics across the board. Meta-analytic approaches generally perform slightly better than metrics that evaluate single studies, except under extreme publication bias, where this pattern reverses.
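The abstract describes the simulation design only at a high level. As a rough illustration of that general setup (not the authors' actual simulation code, sample sizes, or metrics), the Python sketch below simulates pairs of original and replication studies with a known true effect, applies an optional publication-bias filter to the originals, and contrasts a frequentist "significant in the same direction" criterion with a Bayes-factor criterion based on the BIC approximation (Wagenmakers, 2007), which may differ from the Bayes factors used in the paper. All thresholds and parameter values here are illustrative assumptions.

```python
# A minimal sketch, assuming a two-group design with n per group and a
# standardized true effect `true_delta`; not the authors' implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2021)

def two_sample_t(n, delta):
    """Simulate one two-group study and return (t, two-sided p, df)."""
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(delta, 1.0, n)
    t, p = stats.ttest_ind(treatment, control)
    return t, p, 2 * n - 2

def bic_bayes_factor(t, df, n_total):
    """BIC approximation to BF10 for a t test (Wagenmakers, 2007);
    an illustrative stand-in, not the metric used in the paper."""
    delta_bic = n_total * np.log(1.0 + t**2 / df) - np.log(n_total)
    return np.exp(delta_bic / 2.0)

def simulate(true_delta, n=50, n_pairs=5000, publication_bias=True):
    freq_success = bayes_success = kept = 0
    for _ in range(n_pairs):
        t_orig, p_orig, _ = two_sample_t(n, true_delta)
        # Publication bias: only significant, positive originals get "published".
        if publication_bias and not (p_orig < 0.05 and t_orig > 0):
            continue
        kept += 1
        t_rep, p_rep, df = two_sample_t(n, true_delta)
        # Frequentist criterion: replication significant and in the same direction.
        freq_success += (p_rep < 0.05) and (np.sign(t_rep) == np.sign(t_orig))
        # Bayesian criterion: BF10 > 3 counts as evidence for an effect.
        bayes_success += bic_bayes_factor(t_rep, df, 2 * n) > 3
    return freq_success / kept, bayes_success / kept

for delta in (0.0, 0.5):
    freq, bayes = simulate(delta)
    print(f"true delta = {delta}: frequentist success rate = {freq:.2f}, "
          f"Bayes-factor success rate = {bayes:.2f}")
```

With the truth known (an effect is present only when `true_delta` is nonzero), the "success" rates for each metric can be read as correct-inference rates, which is the kind of comparison the study performs across many more scenarios and metrics.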

Original language: English
Article number: 201697
Number of pages: 16
Journal: Royal Society Open Science
Volume: 8
Issue number: 5
DOIs
Publication status: Published - 2021
