Explanation Styles for Trustworthy Autonomous Systems

David A. Robb, Xingkun Liu, Helen Hastie

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a study exploring how natural language explanations can be formulated to manage appropriate trust in a remote autonomous system that fails to complete its mission. Online crowd-sourced participants were shown video vignettes of robots performing an inspection task. We measured participants' mental models, their confidence in their understanding of the robot's behaviour, and their trust in the robot. We found that including history in the explanation increases trust and confidence and helps maintain an accurate mental model, but only if context is also included. In addition, our study shows that explanation formulations lacking context can lead to misplaced participant confidence.
Original language: English
Title of host publication: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems (AAMAS '23)
Publisher: Association for Computing Machinery
Pages: 2298–2300
Number of pages: 3
ISBN (Print): 9781450394321
DOIs
Publication status: Published - 30 May 2023
Event: 22nd International Conference on Autonomous Agents and Multiagent Systems 2023 - ExCeL London conference centre, London, United Kingdom
Duration: 29 May 2023 – 2 Jun 2023
https://aamas2023.soton.ac.uk/

Conference

Conference: 22nd International Conference on Autonomous Agents and Multiagent Systems 2023
Abbreviated title: AAMAS 2023
Country/Territory: United Kingdom
City: London
Period: 29/05/23 – 2/06/23
Internet address: https://aamas2023.soton.ac.uk/

Keywords

  • explanations
  • transparency
  • trust
  • robot faults
  • mental models
