Examining the generalizability of research findings from archival data

Andrew Delios*, Elena Giulia Clemente, Tao Wu, Hongbin Tan, Yong Wang, Michael Gordon, Domenico Viganola, Zhaowei Chen, Anna Dreber, Magnus Johannesson, Thomas Pfeiffer, Generalizability Tests Forecasting Collaboration, Jais Adam-Troian, Eric Luis Uhlmann*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

33 Citations (Scopus)
14 Downloads (Pure)

Abstract

The extent to which results from complex datasets generalize across contexts is critically important to numerous scientific fields as well as to practitioners who rely on such analyses to guide important strategic decisions. Our initiative systematically investigated whether findings from the field of strategic management would emerge in new time periods and new geographies. Original findings that were statistically reliable in the first place were typically obtained again in novel tests, suggesting surprisingly little sensitivity to context. For some social scientific areas of inquiry, results from a specific time and place can be a meaningful guide as to what will be observed more generally.
Original language: English
Article number: e2120377119
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 119
Issue number: 30
DOIs
Publication status: Published - 19 Jul 2022

Keywords

  • research reliability
  • generalizability
  • archival data
  • reproducibility
  • context sensitivity
