Explainable Artificial Intelligence in Healthcare: Opportunities, Gaps and Challenges and a Novel Way to Look at the Problem Space

Petra Korica, Neamat Elgayar, Wei Pang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)
710 Downloads (Pure)

Abstract

Explainable Artificial Intelligence (XAI) is a fast-growing research field; however, its adoption in healthcare is still at an early stage despite the potential that XAI can bring to the application of AI in this industry. Many challenges remain to be solved, including setting standards for explanations, defining the degree of interaction between different stakeholders and the models, implementing quality and performance metrics, agreeing on standards for safety and accountability, and integrating XAI into clinical workflows and IT infrastructure. This paper has two objectives. The first is to present the summarized outcomes of a literature survey and highlight the state of the art for explainability, including gaps, challenges, and opportunities for XAI in the healthcare industry. For easier comprehension of and onboarding to this research field, we suggest a synthesized taxonomy for categorizing explainability methods. The second objective is to ask whether a novel way of looking at the explainability problem space, through a specific problem/domain lens, automated in an AutoML-like fashion, would help mitigate the challenges mentioned above. The literature tends to look at the explainability of AI through a model-first lens, which sets concrete problems and domains aside; for example, explaining a patient survival model is treated the same as explaining a hospital procedure-cost calculation. With a well-identified problem/domain to which XAI should be applied, the scope is clear and well defined, enabling us to (semi-)automatically find suitable models; optimize their parameters, explanations, metrics, stakeholder fit, and safety/accountability level; and suggest means of integrating them into the clinical workflow.
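Illustrative sketch

To make the abstract's problem/domain-first idea concrete, the following is a minimal, hypothetical Python sketch, not the system proposed in the paper: it maps a healthcare problem specification to candidate model/explainer pairs drawn from a hand-written registry, AutoML-style, preferring inherently interpretable models when the safety level is high. All names, registry entries, and selection rules are illustrative assumptions.

    # Hypothetical sketch (not the authors' system): map a healthcare
    # problem specification to candidate (model, explainer) pairs.
    from dataclasses import dataclass

    @dataclass
    class ProblemSpec:
        task: str          # e.g. "survival", "cost_estimation", "diagnosis"
        data: str          # e.g. "tabular", "imaging"
        stakeholder: str   # e.g. "clinician", "patient", "administrator"
        safety_level: str  # "high" for clinical decisions, "low" otherwise

    # Illustrative registry: (model family, explanation method, note).
    REGISTRY = {
        ("survival", "tabular"): [
            ("Cox proportional hazards", "coefficients", "inherently interpretable"),
            ("gradient-boosted trees", "SHAP values", "post-hoc, per-patient"),
        ],
        ("cost_estimation", "tabular"): [
            ("generalized additive model", "shape functions", "inherently interpretable"),
        ],
        ("diagnosis", "imaging"): [
            ("convolutional neural network", "saliency maps", "post-hoc, visual"),
        ],
    }

    def recommend(spec: ProblemSpec):
        """Return candidate (model, explainer) pairs for a problem spec,
        ranking interpretable-by-design models first when safety is high."""
        candidates = REGISTRY.get((spec.task, spec.data), [])
        if spec.safety_level == "high":
            # Sort key is False (first) for inherently interpretable entries.
            candidates = sorted(
                candidates,
                key=lambda c: c[2] != "inherently interpretable",
            )
        return candidates

    if __name__ == "__main__":
        spec = ProblemSpec(task="survival", data="tabular",
                           stakeholder="clinician", safety_level="high")
        for model, explainer, note in recommend(spec):
            print(f"{model:30s} -> {explainer} ({note})")

In a full pipeline of the kind the abstract envisions, the hand-written registry and ranking rule would be replaced by an AutoML-style search that also tunes model parameters and evaluates explanation-quality metrics per stakeholder.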

Original language: English
Title of host publication: Intelligent Data Engineering and Automated Learning – IDEAL 2021
Editors: David Camacho, Peter Tino, Richard Allmendinger, Hujun Yin, Antonio J. Tallón-Ballesteros, Ke Tang, Sung-Bae Cho, Paulo Novais, Susana Nascimento
Publisher: Springer
Pages: 333-342
Number of pages: 10
ISBN (Electronic): 9783030916084
ISBN (Print): 9783030916077
DOIs
Publication status: Published - 23 Nov 2021
Event: 22nd International Conference on Intelligent Data Engineering and Automated Learning 2021 - Manchester, United Kingdom
Duration: 25 Nov 2021 - 27 Nov 2021

Publication series

Name: Lecture Notes in Computer Science
Volume: 13113
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 22nd International Conference on Intelligent Data Engineering and Automated Learning 2021
Abbreviated title: IDEAL 2021
Country/Territory: United Kingdom
City: Manchester
Period: 25/11/21 - 27/11/21

Keywords

  • AI in Healthcare
  • Artificial intelligence
  • Explainability
  • Explainable AI
  • Interpretability
  • Machine learning
  • XAI

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
