BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations

Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn

Research output: Contribution to conference › Paper › peer-review



Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop a novel Bayesian extension to the LIME framework, one of the most widely used approaches in XAI, which we call BayLIME. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency of repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also exhibits better explanation fidelity than the state of the art (LIME, SHAP and GradCAM) thanks to its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate the desirable properties of BayLIME through both theoretical analysis and extensive experiments.
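To make the core idea concrete, the following is a minimal sketch of replacing LIME's weighted linear surrogate with a Bayesian linear surrogate, so that a prior over feature attributions can inform the posterior explanation. The black-box model, perturbation scheme, and kernel width here are illustrative stand-ins, not the paper's exact setup, and scikit-learn's `BayesianRidge` (with its default priors) is used as a generic substitute for BayLIME's prior-informed Bayesian regression.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)

# Hypothetical black-box model: we only query its predictions.
def black_box(X):
    return X @ np.array([3.0, -2.0, 0.5]) + 0.1 * np.sin(X[:, 0])

x0 = np.array([1.0, 1.0, 1.0])  # instance whose prediction we explain

# LIME-style neighbourhood: perturb x0 and weight samples by a
# proximity kernel (closer samples matter more to the local fit).
X_pert = x0 + rng.normal(scale=0.5, size=(500, 3))
y_pert = black_box(X_pert)
kernel_width = 0.75
weights = np.exp(-np.sum((X_pert - x0) ** 2, axis=1) / kernel_width ** 2)

# Bayesian surrogate: the Gaussian prior over the coefficients is the
# hook where prior knowledge (e.g. from other XAI or V&V methods)
# would enter; here it is left at scikit-learn's defaults.
surrogate = BayesianRidge()
surrogate.fit(X_pert, y_pert, sample_weight=weights)

# Posterior-mean coefficients serve as the feature attributions for x0.
print(surrogate.coef_)
```

Because the explanation is a posterior rather than a single weighted least-squares fit, repeated runs with different perturbation samples are pulled toward the same prior, which is the mechanism behind the consistency and kernel-robustness claims above.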
Original language: English
Number of pages: 10
Publication status: Published - 2021
Event: 37th Conference on Uncertainty in Artificial Intelligence 2021 - virtual, Australia
Duration: 27 Jul 2021 - 30 Jul 2021


Conference: 37th Conference on Uncertainty in Artificial Intelligence 2021
Abbreviated title: UAI 2021


  • Explainable AI
  • Bayesian
  • artificial intelligence
  • prediction


