Abstract
We present a study that explores how natural language explanations can be formulated to manage an appropriate level of trust in a remote autonomous system that fails to complete its mission. Online crowd-sourced participants were shown video vignettes of robots performing an inspection task. We measured participants' mental models, their confidence in their understanding of the robot's behaviour, and their trust in the robot. We found that including history in the explanation increases trust and confidence, and helps maintain an accurate mental model, but only if context is also included. In addition, our study shows that some explanation formulations lacking context can lead to misplaced participant confidence.
Original language | English
---|---
Title of host publication | Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023)
Number of pages | 3
Publication status | Accepted/In press - 17 Feb 2023
Event | 22nd International Conference on Autonomous Agents and Multiagent Systems 2023, ExCeL conference centre, London, United Kingdom. Duration: 29 May 2023 → 2 Jun 2023. https://aamas2023.soton.ac.uk/
Conference
Conference | 22nd International Conference on Autonomous Agents and Multiagent Systems 2023
---|---
Abbreviated title | AAMAS 2023
Country/Territory | United Kingdom
City | London
Period | 29/05/23 → 2/06/23
Internet address | https://aamas2023.soton.ac.uk/
Keywords
- explanations
- transparency
- trust
- robot faults
- mental models