Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models

Francisco Javier Chiyah Garcia, David A. Robb, Xingkun Liu, Atanas Laskov, Pedro Patron, Helen Hastie

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

As unmanned vehicles become more autonomous, it is important to maintain a high level of transparency regarding their behaviour and how they operate. This is particularly important in remote locations where they cannot be directly observed. Here, we describe a method for generating explanations in natural language of autonomous system behaviour and reasoning. Our method involves deriving an interpretable model of autonomy through having an expert ‘speak aloud’ and providing various levels of detail based on this model. Through an online evaluation study with operators, we show it is best to generate explanations with multiple possible reasons but tersely worded. This work has implications for designing interfaces for autonomy as well as for explainable AI and operator training.
Original language: English
Title of host publication: Proceedings of the 11th International Conference on Natural Language Generation
Publisher: Association for Computational Linguistics
Pages: 99-108
Number of pages: 10
ISBN (Electronic): 9781948087865
Publication status: Published - Nov 2018
Event: 11th International Conference on Natural Language Generation 2018 - Tilburg University, Tilburg, Netherlands
Duration: 5 Nov 2018 - 8 Nov 2018
https://inlg2018.uvt.nl/

Conference

Conference: 11th International Conference on Natural Language Generation 2018
Abbreviated title: INLG'18
Country: Netherlands
City: Tilburg
Period: 5/11/18 - 8/11/18
Internet address: https://inlg2018.uvt.nl/

Cite this

Garcia, F. J. C., Robb, D. A., Liu, X., Laskov, A., Patron, P., & Hastie, H. (2018). Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models. In Proceedings of the 11th International Conference on Natural Language Generation (pp. 99-108). Association for Computational Linguistics.
Garcia, Francisco Javier Chiyah ; Robb, David A. ; Liu, Xingkun ; Laskov, Atanas ; Patron, Pedro ; Hastie, Helen. / Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models. Proceedings of the 11th International Conference on Natural Language Generation. Association for Computational Linguistics, 2018. pp. 99-108
@inproceedings{724391c106d3475ea4bb8a594a90d031,
title = "Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models",
abstract = "As unmanned vehicles become more autonomous, it is important to maintain a high level of transparency regarding their behaviour and how they operate. This is particularly important in remote locations where they cannot be directly observed. Here, we describe a method for generating explanations in natural language of autonomous system behaviour and reasoning. Our method involves deriving an interpretable model of autonomy through having an expert ‘speak aloud’ and providing various levels of detail based on this model. Through an online evaluation study with operators, we show it is best to generate explanations with multiple possible reasons but tersely worded. This work has implications for designing interfaces for autonomy as well as for explainable AI and operator training.",
author = "Garcia, {Francisco Javier Chiyah} and Robb, {David A.} and Xingkun Liu and Atanas Laskov and Pedro Patron and Helen Hastie",
year = "2018",
month = "11",
language = "English",
pages = "99--108",
booktitle = "Proceedings of the 11th International Conference on Natural Language Generation",
publisher = "Association for Computational Linguistics",
}

Garcia, FJC, Robb, DA, Liu, X, Laskov, A, Patron, P & Hastie, H 2018, Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models. in Proceedings of the 11th International Conference on Natural Language Generation. Association for Computational Linguistics, pp. 99-108, 11th International Conference on Natural Language Generation 2018, Tilburg, Netherlands, 5/11/18.

Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models. / Garcia, Francisco Javier Chiyah; Robb, David A.; Liu, Xingkun; Laskov, Atanas; Patron, Pedro; Hastie, Helen.

Proceedings of the 11th International Conference on Natural Language Generation. Association for Computational Linguistics, 2018. p. 99-108.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models

AU - Garcia, Francisco Javier Chiyah

AU - Robb, David A.

AU - Liu, Xingkun

AU - Laskov, Atanas

AU - Patron, Pedro

AU - Hastie, Helen

PY - 2018/11

Y1 - 2018/11

N2 - As unmanned vehicles become more autonomous, it is important to maintain a high level of transparency regarding their behaviour and how they operate. This is particularly important in remote locations where they cannot be directly observed. Here, we describe a method for generating explanations in natural language of autonomous system behaviour and reasoning. Our method involves deriving an interpretable model of autonomy through having an expert ‘speak aloud’ and providing various levels of detail based on this model. Through an online evaluation study with operators, we show it is best to generate explanations with multiple possible reasons but tersely worded. This work has implications for designing interfaces for autonomy as well as for explainable AI and operator training.

AB - As unmanned vehicles become more autonomous, it is important to maintain a high level of transparency regarding their behaviour and how they operate. This is particularly important in remote locations where they cannot be directly observed. Here, we describe a method for generating explanations in natural language of autonomous system behaviour and reasoning. Our method involves deriving an interpretable model of autonomy through having an expert ‘speak aloud’ and providing various levels of detail based on this model. Through an online evaluation study with operators, we show it is best to generate explanations with multiple possible reasons but tersely worded. This work has implications for designing interfaces for autonomy as well as for explainable AI and operator training.

M3 - Conference contribution

SP - 99

EP - 108

BT - Proceedings of the 11th International Conference on Natural Language Generation

PB - Association for Computational Linguistics

ER -

Garcia FJC, Robb DA, Liu X, Laskov A, Patron P, Hastie H. Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models. In Proceedings of the 11th International Conference on Natural Language Generation. Association for Computational Linguistics. 2018. p. 99-108