Abstract
Given the pressing need to assure algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop a novel Bayesian extension to the LIME framework, one of the most widely used approaches in XAI, which we call BayLIME. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency of repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also exhibits better explanation fidelity than the state of the art (LIME, SHAP and GradCAM) thanks to its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate the desirable properties of BayLIME through both theoretical analysis and extensive experiments.
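In a nutshell, the idea is to replace LIME's weighted ridge surrogate with Bayesian linear regression, so that prior knowledge about feature importances can be combined with the perturbation samples, and the explanation comes with posterior uncertainty. The sketch below illustrates this with scikit-learn's BayesianRidge. It is a minimal illustration under our own assumptions (the `predict_fn`, the Gaussian neighbourhood, and the kernel width are placeholders), not the authors' released implementation; in particular, letting BayesianRidge learn its hyperparameters from data corresponds to the non-informative-prior variant, whereas the paper also studies partially and fully informative priors.

```python
# Minimal BayLIME-style sketch: a LIME-like local surrogate where the
# weighted ridge fit is replaced by Bayesian linear regression.
# Illustration only -- predict_fn, the perturbation scheme and the
# kernel width are assumptions, not the paper's released code.
import numpy as np
from sklearn.linear_model import BayesianRidge

def baylime_explain(predict_fn, x, num_samples=1000, kernel_width=None, rng=None):
    """Fit a Bayesian linear surrogate around a single instance x (1-D array)."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    if kernel_width is None:
        kernel_width = 0.75 * np.sqrt(d)  # mirrors LIME's default heuristic
    # Perturb the instance with Gaussian noise (tabular-style neighbourhood).
    Z = x + rng.normal(scale=1.0, size=(num_samples, d))
    y = predict_fn(Z)  # black-box predictions on the perturbed samples
    # LIME-style exponential kernel: nearby samples get larger weights.
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Bayesian ridge regression yields a posterior over coefficients; its
    # precision hyper-parameters play the role of the prior that BayLIME
    # would instead supply from other XAI or V&V sources.
    surrogate = BayesianRidge()
    surrogate.fit(Z, y, sample_weight=weights)
    # Posterior mean coefficients are the feature-importance explanation;
    # sigma_ is the posterior covariance, i.e. uncertainty of the explanation.
    return surrogate.coef_, surrogate.sigma_

if __name__ == "__main__":
    # Toy usage: explain a quadratic black box at a point.
    f = lambda Z: (Z ** 2).sum(axis=1)
    coefs, cov = baylime_explain(f, np.array([1.0, -2.0, 0.5]), rng=0)
    print("local importances:", coefs)          # roughly the gradient 2*x
    print("posterior std:", np.sqrt(np.diag(cov)))
```

Since the standard lime package lets `explain_instance` accept a custom `model_regressor`, a configured BayesianRidge instance can be plugged into an off-the-shelf LIME pipeline in much the same way.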
| Original language | English |
|---|---|
| Pages | 887-896 |
| Number of pages | 10 |
| Publication status | Published - 2021 |
| Event | 37th Conference on Uncertainty in Artificial Intelligence 2021 - virtual, Australia. Duration: 27 Jul 2021 → 30 Jul 2021. https://auai.org/uai2021/ |
Conference

| Conference | 37th Conference on Uncertainty in Artificial Intelligence 2021 |
|---|---|
| Abbreviated title | UAI 2021 |
| Country/Territory | Australia |
| Period | 27/07/21 → 30/07/21 |
| Internet address | https://auai.org/uai2021/ |
Keywords
- Explainable AI
- Bayesian
- artificial intelligence
- prediction