Perspectives on Psychological Science, Ahead of Print.
Multisite (multilab/many-lab) replications have emerged as a popular way of verifying prior research findings, but their record in social psychology has prompted distrust of the field and a sense of crisis. We review all 36 multisite social-psychology replications (plus three articles reporting multiple ministudies). We start by assuming that both the original studies and the multisite replications were conducted in an honest and diligent fashion, despite often yielding different conclusions. Four of the 36 (11%) were clearly successful, providing significant support for the original hypothesis, and five others (14%) had mixed results. The remaining 27 (75%) were failures. Multiple explanations for this generally poor replication record are considered, including the possibility that the original hypothesis was wrong, operational failure, low participant engagement, and bias toward failure, and the relevant evidence for each is assessed. There was evidence for each of these possibilities, with low engagement emerging as a widespread problem (reflected in high rates of discarded data and weak manipulation checks). The few procedures involving actual interpersonal interaction fared much better than the others. We discuss implications in relation to manipulation checks, effect sizes, and impact on the field, and we offer recommendations for improving future multisite projects.