Rethinking statistical analysis methods for CHI

Maurits Kaptein, Judy Robertson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

53 Citations (Scopus)


CHI researchers typically use a significance testing approach to statistical analysis when testing hypotheses during usability evaluations. However, the appropriateness of this approach is under increasing criticism, with statisticians, economists, and psychologists arguing against the routine interpretation of results using "canned" p values. Three problems with current practice - the fallacy of the transposed conditional, the neglect of statistical power, and a reluctance to interpret the size of effects - can lead us to build weak theories based on vaguely specified hypotheses, resulting in empirical studies whose results are of limited practical or scientific use. Using publicly available data presented at CHI 2010 [19] as an example, we address each of the three concerns and promote consideration of the magnitude and actual importance of effects, rather than statistical significance, as the new criteria for evaluating CHI research.
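The abstract's central point - that statistical significance and practical importance come apart, especially at large sample sizes - can be sketched with a small simulation. This is an illustrative example, not code from the paper: the effect-size measure (Cohen's d with a pooled standard deviation), the large-sample normal approximation to the two-sided t-test, and all sample parameters are assumptions chosen for the demonstration.

```python
import math
import random

def cohens_d(a, b):
    # Standardized mean difference using the pooled standard deviation
    # (Cohen's d) - one common way to report the magnitude of an effect.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp

def two_sample_p(a, b):
    # Two-sided p value for a difference in means, using the normal
    # approximation to the t distribution (reasonable for large n).
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
# A tiny true effect (0.05 SD) with a very large sample: the p value
# comes out "significant", yet the effect size is negligible by any
# conventional benchmark - significance alone says little about importance.
a = [random.gauss(0.05, 1) for _ in range(20000)]
b = [random.gauss(0.00, 1) for _ in range(20000)]
print(f"p = {two_sample_p(a, b):.2e}, d = {cohens_d(a, b):.3f}")
```

Reporting d alongside (or instead of) the p value makes the triviality of the effect visible, which is exactly the shift in evaluation criteria the abstract argues for.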
Original language: English
Title of host publication: CHI '12 Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems
Place of publication: Austin, Texas
Publisher: Association for Computing Machinery
Number of pages: 10
ISBN (Print): 978-1-4503-1015-4
Publication status: Published - 2012


  • Usability evaluation
  • Bayesian statistics
  • Research methods


