
Investigating the replicability of the social and behavioural sciences

Andrew H. Tyner, Anna Lou Abatayo, Mason Daley, Samuel Field, Nicholas Fox, Noah A. Haber, Krystal M. Hahn, Melissa Kline Struhl, Brinna Mawhinney, Olivia Miske, Priya Silverstein, Courtney K. Soderberg, Theresa Stankov, Ahmed Abbasi, Christopher L. Aberson, Balazs Aczel, Matúš Adamkovič, Nihan Albayrak-Aydemir, Peter J. Allen, Michael R. Andreychik, Eli Awtrey, Erick Axxe, Flávio Azevedo, Miles D. Bader, Bence Bago, James Bailey, Marjan Bakker, Gabriel Banik, George C. Banks, Ernest Baskin, Anatolia Batruch, Annika Beatteay, Sophie M. Behr, Nicholas Berente, Zachariah Berry, Jędrzej Białkowski, Bojana Bodroža, Laura Boeschoten, Miklos Bognar, Christian Bokhove, Diane Bonfiglio, Robin Bouwman, Timothy F. Brady, Scott R. Braithwaite, Gabriel Briceño Jiménez, Cameron Brick, Traci Bricka, Roman Briker, Annette N. Brown, Gordon D. A. Brown, Robbie C. M. van Aert, Kathryn Caldwell, Sara Captain, Tabaré Capitán, Jesse Chandler, Tessa Charles, Christopher R. Chartier, Rahul Chawdhary, Kent Jason Cheng, William J. Chopik, Kelly Wolfe, Brian A. Nosek*, Timothy M. Errington
*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Pursuing replicability (independent evidence for previous claims) is important for creating generalizable knowledge 1,2. Here we attempted replications of 274 claims of positive results from 164 quantitative papers published from 2009 to 2018 in 54 journals in the social and behavioural sciences. Replications were highly powered on average to detect the original effect size (median power of 99.6%), used original materials when relevant and available, and were peer reviewed in advance through a standardized internal protocol. Replications showed statistically significant results in the original pattern for 151 of 274 claims (55.1% (95% confidence interval (CI) 49.2-60.9%)) and for 80.8 of 164 papers (49.3% (95% CI 43.8-54.7%)), weighted for replicating multiple claims per paper. We observed modest variation in replication rates across disciplines (42.5-63.1%), although some estimates had high uncertainty. The median Pearson's r effect size was 0.25 (95% CI 0.21-0.27) for original studies and 0.10 (95% CI 0.09-0.13) for replication studies, an 82.4% (95% CI 67.8-88.2%) reduction in shared variance. Thirteen methods for evaluating replication success provided estimates ranging from 28.6% to 74.8% (median of 49.3%). Some decline in effect size and significance is expected based on power to detect original effects and regression to the mean, because we replicated only positive results. We observe that challenges for replicability extend across the social-behavioural sciences, illustrating the importance of identifying conditions that promote or inhibit replicability 3,4.
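As a rough check on the shared-variance figure in the abstract: shared variance is the square of Pearson's r, so the reported median effect sizes alone imply a reduction of about 1 − (0.10/0.25)² = 84%. This is only a back-of-the-envelope sketch from the two medians; the paper's 82.4% estimate is computed from the paired study-level data, which the medians-only arithmetic approximates.

```python
# Illustrative arithmetic only (not the paper's exact computation):
# shared variance between two measures is r**2, so the reduction in
# shared variance from original to replication studies can be
# approximated from the reported median effect sizes.
r_original = 0.25     # median Pearson's r, original studies (from abstract)
r_replication = 0.10  # median Pearson's r, replication studies (from abstract)

shared_var_original = r_original ** 2        # 0.0625
shared_var_replication = r_replication ** 2  # 0.0100

reduction = 1 - shared_var_replication / shared_var_original
print(f"{reduction:.1%}")  # 84.0% from the medians alone; the paper reports
                           # 82.4%, estimated from paired study-level data
```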

Original language: English
Pages (from-to): 143-150
Number of pages: 8
Journal: Nature
Volume: 652
Issue number: 8108
Early online date: 1 Apr 2026
DOIs
Publication status: Published - 2 Apr 2026

Keywords

  • Social Sciences
  • Behavioral Sciences
  • Reproducibility
  • Artificial Intelligence (AI)
  • Uncertainty
  • Open Science
  • Replication
