Issues affecting user confidence in explanation systems

David A. Robb, Stefano Padilla, Thomas S. Methven, Yibo Liang, Pierre Le Bras, Tanya Howden, Azimeh Gharavi, Mike John Chantler, Ioannis Chalkiadakis

Research output: Contribution to journal › Conference article

Abstract

Recent successes of artificial intelligence, machine learning, and deep learning have generated exciting challenges in the area of explainability. For societal, regulatory, and utility reasons, systems that exploit these technologies are increasingly being required to explain their outputs to users. In addition, appropriate and timely explanation can improve user experience, performance, and confidence. We have found that users are reluctant to use such systems if they lack the understanding and confidence to explain the underlying processes and reasoning behind the results. In this paper, we present a preliminary study by nine experts that identified research issues concerning explanation and user confidence. We used a three-session collaborative process to collect, aggregate, and generate joint reflections from the group. Using this process, we identified six areas of interest that we hope will serve as a catalyst for stimulating discussion.

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2151
Publication status: Published - 30 Jul 2018
Event: 1st SICSA Workshop on Reasoning, Learning and Explainability 2018 - Aberdeen, United Kingdom
Duration: 27 Jun 2018 → …

Keywords

  • AI
  • Confidence
  • Decision Making
  • Explanations
  • Systems

ASJC Scopus subject areas

  • Computer Science(all)

Cite this

Robb, D. A., Padilla, S., Methven, T. S., Liang, Y., Le Bras, P., Howden, T., ... Chalkiadakis, I. (2018). Issues affecting user confidence in explanation systems. CEUR Workshop Proceedings, 2151.
@article{a32e639a7e614e78b5b7c800fe27b137,
title = "Issues affecting user confidence in explanation systems",
abstract = "Recent successes of artificial intelligence, machine learning, and deep learning have generated exciting challenges in the area of explainability. For societal, regulatory, and utility reasons, systems that exploit these technologies are increasingly being required to explain their outputs to users. In addition, appropriate and timely explanation can improve user experience, performance, and confidence. We have found that users are reluctant to use such systems if they lack the understanding and confidence to explain the underlying processes and reasoning behind the results. In this paper, we present a preliminary study by nine experts that identified research issues concerning explanation and user confidence. We used a three-session collaborative process to collect, aggregate, and generate joint reflections from the group. Using this process, we identified six areas of interest that we hope will serve as a catalyst for stimulating discussion.",
keywords = "AI, Confidence, Decision Making, Explanations, Systems",
author = "Robb, {David A.} and Stefano Padilla and Methven, {Thomas S.} and Yibo Liang and {Le Bras}, Pierre and Tanya Howden and Azimeh Gharavi and Chantler, {Mike John} and Ioannis Chalkiadakis",
year = "2018",
month = "7",
day = "30",
language = "English",
volume = "2151",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
publisher = "CEUR-WS",

}


Link: http://www.scopus.com/inward/record.url?scp=85054937032&partnerID=8YFLogxK