Rethinking statistical analysis methods for CHI

Maurits Kaptein, Judy Robertson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

35 Citations (Scopus)

Abstract

CHI researchers typically use a significance testing approach to statistical analysis when testing hypotheses during usability evaluations. However, the appropriateness of this approach is under increasing criticism, with statisticians, economists, and psychologists arguing against the routine interpretation of results using "canned" p values. Three problems with current practice - the fallacy of the transposed conditional, a neglect of power, and the reluctance to interpret the size of effects - can lead us to build weak theories based on vaguely specified hypotheses, resulting in empirical studies whose results are of limited practical or scientific use. Using publicly available data presented at CHI 2010 [19] as an example, we address each of the three concerns and promote consideration of the magnitude and actual importance of effects, as opposed to statistical significance, as the new criteria for evaluating CHI research.
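The distinction the abstract draws between statistical significance and effect size can be illustrated with a minimal sketch (not from the paper; the task-time numbers below are hypothetical): with a large enough sample, a practically negligible difference in means yields a large t statistic (and hence a tiny p value), while a standardized effect size such as Cohen's d makes the triviality of the difference visible.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d from summary statistics, using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def welch_t(mean1, mean2, sd1, sd2, n1, n2):
    """Welch's t statistic for two independent groups."""
    return (mean1 - mean2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# Hypothetical task-completion times (seconds) for two interface variants:
# a 0.5 s mean difference, SD 5 s, 5000 participants per group.
d = cohens_d(30.5, 30.0, 5.0, 5.0, 5000, 5000)
t = welch_t(30.5, 30.0, 5.0, 5.0, 5000, 5000)
print(d, t)  # d = 0.1 (a negligible effect), t = 5.0 (highly "significant")
```

The point is the one the authors make: the t statistic grows with sample size even when the standardized effect stays fixed and small, so significance alone says little about practical importance.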
Original language: English
Title of host publication: CHI '12 Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems
Place of Publication: Austin, Texas
Publisher: Association for Computing Machinery
Pages: 1105-1114
Number of pages: 10
ISBN (Print): 978-1-4503-1015-4
DOI: 10.1145/2207676.2208557
Publication status: Published - 2012

Keywords

  • Usability evaluation
  • Bayesian Statistics
  • Research Methods


Cite this

Kaptein, M., & Robertson, J. (2012). Rethinking statistical analysis methods for CHI. In CHI '12 Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (pp. 1105-1114). Association for Computing Machinery. https://doi.org/10.1145/2207676.2208557