I don't understand! Evaluation methods for natural language explanations

Miruna Clinciu, Arash Eshghi, Helen Hastie

Research output: Contribution to journal › Conference article › peer-review


Abstract

Explainability of intelligent systems is key to their future adoption. While much work is ongoing on methods for explaining complex, opaque systems, there is little current work on evaluating how effective these explanations are, in particular with respect to the user's understanding. Natural language (NL) explanations can be seen as an intuitive channel between humans and artificial intelligence systems, in particular for enhancing transparency. This paper presents existing work on how evaluation methods from the field of Natural Language Generation (NLG) can be mapped onto NL explanations. We also present a preliminary investigation into the relationship between linguistic features and human evaluation, using a dataset of NL explanations derived from Bayesian Networks.
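To illustrate the kind of analysis described in the abstract, the sketch below correlates a simple surface feature of NL explanations (mean sentence length) with human ratings. The explanation strings, ratings, and the choice of feature are illustrative assumptions, not data or methods from the paper.

```python
# Minimal sketch (not the authors' pipeline): correlate a simple linguistic
# feature of NL explanations with hypothetical human evaluation scores.
# Assumes SciPy is installed; explanations and ratings are placeholders.
from scipy.stats import spearmanr

explanations = [
    "The alarm probably went off because a burglary is more likely than an earthquake.",
    "Given the evidence, the probability of rain increased, which explains the wet grass.",
    "Smoking raises the chance of bronchitis, so the cough is best explained by bronchitis.",
]
human_ratings = [4.2, 3.8, 4.5]  # hypothetical mean Likert "understandability" scores

def mean_sentence_length(text: str) -> float:
    """Average number of tokens per sentence (naive split on '.')."""
    sentences = [s for s in text.split(".") if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

features = [mean_sentence_length(e) for e in explanations]

# Rank correlation between the linguistic feature and human ratings.
rho, p = spearmanr(features, human_ratings)
print(f"Spearman correlation between sentence length and rating: {rho:.2f} (p={p:.2f})")
```

In practice such an analysis would use many more rated explanations and a wider set of features (e.g. readability metrics), with the correlation indicating which surface properties track human judgements.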

Original language: English
Pages (from-to): 17-24
Number of pages: 8
Journal: CEUR Workshop Proceedings
Volume: 2894
Publication status: Published - 2 Jul 2021
Event: SICSA Workshop on eXplainable Artificial Intelligence 2021 - Aberdeen, United Kingdom
Duration: 1 Jun 2021 - 1 Jun 2021

Keywords

  • Evaluation
  • Explanations
  • Natural language

ASJC Scopus subject areas

  • General Computer Science
