Issues affecting user confidence in explanation systems

David A. Robb, Stefano Padilla, Thomas S. Methven, Yibo Liang, Pierre Le Bras, Tanya Howden, Azimeh Gharavi, Mike John Chantler, Ioannis Chalkiadakis

Research output: Contribution to journal › Conference article


Abstract

Recent successes in artificial intelligence, machine learning, and deep learning have generated exciting challenges in the area of explainability. For societal, regulatory, and utility reasons, systems that exploit these technologies are increasingly required to explain their outputs to users. In addition, appropriate and timely explanation can improve user experience, performance, and confidence. We have found that users are reluctant to use such systems when they lack the understanding and confidence to explain the underlying processes and the reasoning behind the results. In this paper, we present a preliminary study in which nine experts identified research issues concerning explanation and user confidence. We used a three-session collaborative process to collect, aggregate, and generate joint reflections from the group. Through this process, we identified six areas of interest that we hope will serve as a catalyst for discussion.

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2151
Publication status: Published - 30 Jul 2018
Event: 1st SICSA Workshop on Reasoning, Learning and Explainability 2018 - Aberdeen, United Kingdom
Duration: 27 Jun 2018 → …

Keywords

  • AI
  • Confidence
  • Decision Making
  • Explanations
  • Systems

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Robb, D. A., Padilla, S., Methven, T. S., Liang, Y., Le Bras, P., Howden, T., ... Chalkiadakis, I. (2018). Issues affecting user confidence in explanation systems. CEUR Workshop Proceedings, 2151.