A study of automatic metrics for the evaluation of natural language explanations

Miruna-Adriana Clinciu, Arash Eshghi, Helen Hastie

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)
5 Downloads (Pure)

Abstract

As transparency becomes key for robotics and AI, it will be necessary to evaluate the methods through which transparency is provided, including automatically generated natural language (NL) explanations. Here, we explore parallels between the generation of such explanations and the much-studied field of evaluation of Natural Language Generation (NLG). Specifically, we investigate which of the NLG evaluation measures map well to explanations. We present the ExBAN corpus: a crowd-sourced corpus of NL explanations for Bayesian Networks. We run correlations comparing human subjective ratings with NLG automatic measures. We find that embedding-based automatic NLG evaluation methods, such as BERTScore and BLEURT, have a higher correlation with human ratings, compared to word-overlap metrics, such as BLEU and ROUGE. This work has implications for Explainable AI and transparent robotic and autonomous systems.
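The correlation analysis described above can be sketched in plain Python. This is not the authors' code, and the scores below are made-up illustrative values rather than data from the ExBAN corpus; it only shows the kind of rank-correlation computation (here Spearman's rho, implemented directly) used to compare automatic metric scores against human ratings.

```python
def rankdata(xs):
    """Return 1-based average ranks (ties share the mean rank)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # group tied values so they receive the same average rank
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-explanation scores (illustration only).
human = [4.5, 3.0, 2.0, 4.0, 1.5]          # e.g. mean human clarity ratings
embed = [0.91, 0.78, 0.66, 0.84, 0.70]     # an embedding-based metric
overlap = [0.30, 0.34, 0.28, 0.25, 0.33]   # a word-overlap metric

for name, scores in [("embedding-based", embed), ("word-overlap", overlap)]:
    print(f"{name}: rho = {spearman(human, scores):.2f}")
```

On this toy data the embedding-based metric tracks the human ranking far more closely than the word-overlap one, mirroring the direction of the paper's finding for BERTScore/BLEURT versus BLEU/ROUGE.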

Original language: English
Title of host publication: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics
Subtitle of host publication: Main Volume
Publisher: Association for Computational Linguistics
Pages: 2376-2387
Number of pages: 12
ISBN (Electronic): 9781954085022
Publication status: Published - Apr 2021
Event: 16th Conference of the European Chapter of the Association for Computational Linguistics 2021 - Virtual, Online
Duration: 19 Apr 2021 - 23 Apr 2021

Conference

Conference: 16th Conference of the European Chapter of the Association for Computational Linguistics 2021
Abbreviated title: EACL 2021
City: Virtual, Online
Period: 19/04/21 - 23/04/21

ASJC Scopus subject areas

  • Software
  • Computational Theory and Mathematics
  • Linguistics and Language
