Recent successes in artificial intelligence, machine learning, and deep learning have generated exciting challenges in the area of explainability. For societal, regulatory, and utility reasons, systems that exploit these technologies are increasingly required to explain their outputs to users. In addition, appropriate and timely explanation can improve user experience, performance, and confidence. We have found that users are reluctant to adopt such systems when they do not understand, and so cannot confidently explain, the processes and reasoning behind the results. In this paper, we present a preliminary study involving nine experts that identified research issues concerning explanation and user confidence. We used a three-session collaborative process to collect, aggregate, and generate joint reflections from the group. Through this process, we identified six areas of interest that we hope will serve as a catalyst for discussion.
Journal: CEUR Workshop Proceedings
Publication status: Published - 30 Jul 2018
Event: 1st SICSA Workshop on Reasoning, Learning and Explainability 2018 - Aberdeen, United Kingdom
Duration: 27 Jun 2018 → …
Keywords
- Decision Making

ASJC Scopus subject areas
- Computer Science (all)