Fact-based content weighting for evaluating abstractive summarisation

Xinnuo Xu, Ondřej Dušek, Jingyi Li, Verena Rieser, Ioannis Konstas

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

17 Citations (Scopus)

Abstract

Abstractive summarisation is notoriously hard to evaluate since standard word-overlap-based metrics are biased towards specific words in the human reference. We introduce a new evaluation metric which abstracts away from the word-level and instead is based on fact-level content weighting, i.e. relating the facts of the document to the facts of the summary. We follow the assumption that a good summary will reflect all relevant facts, i.e. the ones present in the ground truth (human-generated reference summary). We confirm this hypothesis by showing that our weightings are highly correlated to human perception and compare favourably to the recent manual highlight-based metric of Hardy et al. (2019).
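To make the idea concrete, the sketch below is a toy, hypothetical illustration (not the paper's actual method) of fact-level content weighting: facts are represented as (subject, relation, object) triples, and a candidate summary is scored by the fraction of reference facts it covers, rather than by word overlap. All names and the triple representation are assumptions for illustration only.

```python
# Hypothetical sketch of fact-level scoring: a candidate summary is judged
# by how many reference facts it reflects, not by shared surface words.
# Facts are toy (subject, relation, object) triples; the paper's actual
# fact extraction and weighting scheme differs.

def fact_overlap(candidate_facts, reference_facts):
    """Recall-style score: fraction of reference facts covered by the candidate."""
    if not reference_facts:
        return 0.0
    candidate_set = set(candidate_facts)
    covered = sum(1 for fact in reference_facts if fact in candidate_set)
    return covered / len(reference_facts)

# Toy example: the reference summary contains two facts,
# the candidate summary expresses only one of them.
reference = {("company", "announced", "layoffs"),
             ("layoffs", "affect", "500 staff")}
candidate = {("company", "announced", "layoffs")}

print(fact_overlap(candidate, reference))  # 0.5
```

Because the comparison happens at the fact level, a candidate that paraphrases a reference fact with different wording would, under a real fact extractor, still receive credit — which is the bias the word-overlap metrics criticised in the abstract cannot avoid.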
Original language: English
Title of host publication: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Publisher: Association for Computational Linguistics
Pages: 5071-5081
Number of pages: 11
ISBN (Electronic): 9781952148255
DOIs
Publication status: Published - Jul 2020
Event: 58th Annual Meeting of the Association for Computational Linguistics 2020 - Virtual, Online, United States
Duration: 5 Jul 2020 – 10 Jul 2020

Conference

Conference: 58th Annual Meeting of the Association for Computational Linguistics 2020
Abbreviated title: ACL 2020
Country/Territory: United States
City: Virtual, Online
Period: 5/07/20 – 10/07/20

ASJC Scopus subject areas

  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
