Abstract
Neural natural language generation (NNLG) systems are known for their pathological outputs, i.e., generating text that is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models that implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97%, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination.
Original language | English
---|---
Title of host publication | Proceedings of the 12th International Conference on Natural Language Generation
Publisher | Association for Computational Linguistics
Pages | 421–426
Number of pages | 6
ISBN (Electronic) | 9781950737949
DOIs |
Publication status | Published - 2019
Event | 12th International Conference on Natural Language Generation 2019, Tokyo, Japan. Duration: 28 Oct 2019 → 1 Nov 2019
Conference
Conference | 12th International Conference on Natural Language Generation 2019
---|---
Country/Territory | Japan
City | Tokyo
Period | 28/10/19 → 1/11/19