Semantic Noise Matters for Neural Natural Language Generation

Ondřej Dušek, David Howcroft, Verena Rieser

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Neural natural language generation (NNLG) systems are known for their pathological outputs, i.e., generating text that is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models that implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97%, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination.
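The omission/hallucination distinction is made at the level of slots in the input meaning representation (MR). As a rough illustration only, the sketch below checks an E2E-style attribute-value MR against a generated sentence; `parse_mr`, `slot_errors`, and the naive substring matching are hypothetical simplifications for this page, not the authors' evaluation code.

```python
# Minimal sketch of a slot-level semantic error check for NNLG outputs.
# NOT the paper's evaluation script: the E2E-style MR format and the
# substring matching below are simplifying assumptions for illustration.

def parse_mr(mr: str) -> dict:
    """Parse an E2E-style MR like 'name[Alimentum], area[city centre]'."""
    slots = {}
    for part in mr.split(", "):
        attr, _, rest = part.partition("[")
        slots[attr.strip()] = rest.rstrip("]")
    return slots

def slot_errors(mr: str, output: str, known_values: set[str]) -> dict:
    """Count omitted slot values and hallucinated known values (naively)."""
    slots = parse_mr(mr)
    text = output.lower()
    # Omission: an input slot value never surfaces in the output text.
    omitted = [v for v in slots.values() if v.lower() not in text]
    # Hallucination: a known value not in the MR appears in the output.
    hallucinated = [v for v in known_values - set(slots.values())
                    if v.lower() in text]
    return {"omitted": omitted, "hallucinated": hallucinated}

if __name__ == "__main__":
    mr = "name[Alimentum], area[city centre], priceRange[cheap]"
    output = "Alimentum is a cheap restaurant."
    print(slot_errors(mr, output, known_values={"riverside"}))
    # -> {'omitted': ['city centre'], 'hallucinated': []}
```

Realistic slot-error scoring typically relies on curated per-slot patterns rather than raw substring matches, since a value such as "cheap" may be paraphrased (e.g., "low-priced") in fluent output.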
Original language: English
Title of host publication: Proceedings of the 12th International Conference on Natural Language Generation
Publisher: Association for Computational Linguistics
Pages: 421–426
Number of pages: 6
ISBN (Electronic): 9781950737949
DOIs
Publication status: Published - 2019
Event: 12th International Conference on Natural Language Generation 2019 - Tokyo, Japan
Duration: 28 Oct 2019 – 1 Nov 2019

Conference

Conference: 12th International Conference on Natural Language Generation 2019
Country/Territory: Japan
City: Tokyo
Period: 28/10/19 – 1/11/19
