Referenceless Quality Estimation for Natural Language Generation

Ondrej Dusek, Jekaterina Novikova, Verena Rieser

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Traditional automatic evaluation measures for natural language generation (NLG) use costly human-authored references to estimate the quality of a system output. In this paper, we propose a referenceless quality estimation (QE) approach based on recurrent neural networks, which predicts a quality score for an NLG system output by comparing it to the source meaning representation only. Our method outperforms traditional metrics and a constant baseline in most respects; we also show that synthetic data helps to increase correlation by 21% compared to the base system. Our results are comparable to results obtained in similar QE tasks despite the more challenging setting.
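
As an illustration only (not the architecture from the paper), the following PyTorch sketch shows what a referenceless QE model of this general kind can look like: two GRU encoders read the source meaning representation (MR) and the system output, and a small feed-forward regressor maps their final hidden states to a single quality score. All layer sizes, the token handling, and the choice of PyTorch are assumptions made for the sake of the example.

import torch
import torch.nn as nn


class ReferencelessQE(nn.Module):
    """Illustrative referenceless QE model: score = f(MR encoding, output encoding)."""

    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.mr_encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out_encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),  # single predicted quality score
        )

    def forward(self, mr_ids, out_ids):
        # Encode the source MR and the system output separately.
        _, mr_state = self.mr_encoder(self.embed(mr_ids))
        _, out_state = self.out_encoder(self.embed(out_ids))
        # Concatenate the final hidden states and regress to a score.
        joint = torch.cat([mr_state[-1], out_state[-1]], dim=-1)
        return self.scorer(joint).squeeze(-1)


# Usage with dummy token IDs for a batch of two (MR, output) pairs.
model = ReferencelessQE(vocab_size=1000)
mr = torch.randint(0, 1000, (2, 12))    # MR token IDs (dummy data)
out = torch.randint(0, 1000, (2, 20))   # system-output token IDs (dummy data)
print(model(mr, out).shape)             # torch.Size([2])

Such a model would be trained with a regression loss (e.g. mean squared error) against human quality ratings, matching the referenceless setting described in the abstract; the paper's actual architecture and training details are given in the publication itself.
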
Original language: English
Title of host publication: Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017
Publisher: ICML
Publication status: Published - 10 Aug 2017
Event: 1st Workshop on Learning to Generate Natural Language - ICML Conference, International Convention Centre, Sydney, Australia
Duration: 10 Aug 2017 - 10 Aug 2017
Conference number: 1
https://sites.google.com/site/langgen17/accepted-papers

Workshop

Workshop: 1st Workshop on Learning to Generate Natural Language
Abbreviated title: LGNL
Country/Territory: Australia
City: Sydney
Period: 10/08/17 - 10/08/17
Internet address: https://sites.google.com/site/langgen17/accepted-papers
