Issues affecting user confidence in explanation systems

David A. Robb, Stefano Padilla*, Thomas S. Methven, Yibo Liang, Pierre Le Bras, Tanya Howden, Azimeh Gharavi, Mike John Chantler, Ioannis Chalkiadakis

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review


Abstract

Recent successes of artificial intelligence, machine learning, and deep learning have generated exciting challenges in the area of explainability. For societal, regulatory, and utility reasons, systems that exploit these technologies are increasingly being required to explain their outputs to users. In addition, appropriate and timely explanation can improve user experience, performance, and confidence. We have found that users are reluctant to use such systems if they lack the understanding and confidence to explain the underlying processes and reasoning behind the results. In this paper, we present a preliminary study by nine experts that identified research issues concerning explanation and user confidence. We used a three-session collaborative process to collect, aggregate, and generate joint reflections from the group. Using this process, we identified six areas of interest that we hope will serve as a catalyst for stimulating discussion.

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2151
Publication status: Published - 30 Jul 2018
Event: 1st SICSA Workshop on Reasoning, Learning and Explainability 2018 - Aberdeen, United Kingdom
Duration: 27 Jun 2018 → …

Keywords

  • AI
  • Confidence
  • Decision Making
  • Explanations
  • Systems

ASJC Scopus subject areas

  • General Computer Science
