The many Shapley values for explainable artificial intelligence: A sensitivity analysis perspective

Emanuele Borgonovo, Elmar Plischke, Giovanni Rabitti

Research output: Contribution to journal › Article › peer-review


Abstract

Predictive models are increasingly used for managerial and operational decision-making. The use of complex machine learning algorithms, the growth in computing power, and the increase in data acquisition have amplified the black-box effect in data science. Consequently, a growing body of literature is investigating methods for interpretability and explainability. We focus on methods based on Shapley values, which are gaining attention as measures of feature importance for explaining black-box predictions. Our analysis follows a hierarchy of value functions and proves several theoretical properties that connect the indices at the alternative levels. We bridge the notions of totally monotone games and Shapley values, and introduce new interaction indices based on the Shapley-Owen values. The hierarchy reveals synergies that emerge when combining Shapley effects computed at different levels. We then propose a novel sensitivity analysis setting that combines the benefits of both local and global Shapley explanations, which we refer to as the “glocal” approach. We illustrate our integrated approach and discuss the managerial insights it provides in the context of a data-science problem related to health insurance policy-making.
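To make the Shapley-value attributions discussed in the abstract concrete, below is a minimal, illustrative Python sketch, not the paper's implementation: it computes exact local Shapley values for a single prediction by enumerating coalitions, under one common (assumed) choice of value function in which features outside the coalition are replaced by baseline values. All names here (shapley_values, predict, baseline) are hypothetical; the paper studies a whole hierarchy of value functions, of which this fixes just one.

from itertools import combinations
from math import factorial

import numpy as np


def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for a single instance x.

    predict  : callable mapping a 1-D feature vector to a scalar prediction
    x        : the instance to explain (1-D numpy array)
    baseline : reference values used for features outside the coalition
               (mean imputation is one common, assumed choice)
    """
    d = len(x)

    def value(coalition):
        # v(S): prediction with features in S taken from x, the rest from baseline
        z = baseline.copy()
        z[list(coalition)] = x[list(coalition)]
        return predict(z)

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for s in combinations(others, k):
                # Shapley weight |S|! (d - |S| - 1)! / d! for coalition size k
                weight = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += weight * (value(s + (i,)) - value(s))
    return phi


if __name__ == "__main__":
    # Toy black-box model with an interaction term between features 0 and 1
    predict = lambda z: 2.0 * z[0] + z[1] + 0.5 * z[0] * z[1]
    x = np.array([1.0, 2.0, 3.0])
    baseline = np.zeros(3)
    phi = shapley_values(predict, x, baseline)
    print(phi)  # per-feature attributions; the interaction is split equally
    print(phi.sum(), predict(x) - predict(baseline))  # efficiency check

Exact enumeration costs O(2^d) value-function evaluations, so in practice sampling-based approximations are used for models with many features; the sketch above is only meant to show the definition at work.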
Original language: English
Pages (from-to): 911-926
Number of pages: 16
Journal: European Journal of Operational Research
Volume: 318
Issue number: 3
Early online date: 22 Jun 2024
Publication status: Published - 1 Nov 2024

Keywords

  • Aggregation of local importance effects
  • Analytics
  • Game theory
  • Interactions
  • Sensitivity analysis

ASJC Scopus subject areas

  • Information Systems and Management
  • General Computer Science
  • Modelling and Simulation
  • Management Science and Operations Research
