Abstract
Traditional automatic evaluation measures for natural language generation (NLG) rely on costly human-authored references to estimate the quality of a system output. In this paper, we propose a referenceless quality estimation (QE) approach based on recurrent neural networks, which predicts a quality score for an NLG system output by comparing it to the source meaning representation only. Our method outperforms traditional metrics and a constant baseline in most respects; we also show that synthetic data increases correlation results by 21% compared to the base system. Our results are comparable to those obtained in similar QE tasks, despite the more challenging setting.
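The abstract's core idea, scoring an NLG output against its source meaning representation (MR) alone, can be sketched with two recurrent encoders whose final states feed a scoring layer. The sketch below is illustrative only: the weights are random and untrained, the dimensions and token format are hypothetical, and it does not reproduce the paper's actual architecture.

```python
import math
import random
import zlib

# Hypothetical dimensions for the sketch; a real model would be trained.
DIM = 8
rng = random.Random(0)

def rand_matrix(rows, cols):
    return [[rng.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def embed(token, dim=DIM):
    # Deterministic pseudo-embedding derived from a stable token checksum.
    r = random.Random(zlib.crc32(token.encode("utf-8")))
    return [r.uniform(-0.5, 0.5) for _ in range(dim)]

def rnn_encode(tokens, w_in, w_rec):
    # Plain Elman recurrence: h_t = tanh(W_in x_t + W_rec h_{t-1}).
    h = [0.0] * DIM
    for tok in tokens:
        x = embed(tok)
        h = [math.tanh(sum(w_in[i][j] * x[j] for j in range(DIM)) +
                       sum(w_rec[i][j] * h[j] for j in range(DIM)))
             for i in range(DIM)]
    return h

# Separate (random, untrained) encoders for the MR and the system output.
W_IN_MR, W_REC_MR = rand_matrix(DIM, DIM), rand_matrix(DIM, DIM)
W_IN_OUT, W_REC_OUT = rand_matrix(DIM, DIM), rand_matrix(DIM, DIM)
W_SCORE = [rng.uniform(-0.5, 0.5) for _ in range(2 * DIM)]

def quality_score(mr_tokens, output_tokens):
    # Encode both sequences, concatenate the final states,
    # and squash a linear combination to a (0, 1) quality score.
    h = (rnn_encode(mr_tokens, W_IN_MR, W_REC_MR) +
         rnn_encode(output_tokens, W_IN_OUT, W_REC_OUT))
    z = sum(w * v for w, v in zip(W_SCORE, h))
    return 1.0 / (1.0 + math.exp(-z))

score = quality_score(
    ["name[Bar Central]", "food[Italian]"],
    ["Bar", "Central", "serves", "Italian", "food", "."])
```

Because the scorer sees only the MR and the candidate output, no human-written reference text is needed at evaluation time, which is the referenceless property the abstract describes.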
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017 |
| Publisher | ICML |
| Publication status | Published - 10 Aug 2017 |
| Event | 1st Workshop on Learning to Generate Natural Language (ICML 2017 workshop), International Convention Centre, Sydney, Australia. Duration: 10 Aug 2017 → 10 Aug 2017. Conference number: 1. https://sites.google.com/site/langgen17/accepted-papers |
Workshop
| Workshop | 1st Workshop on Learning to Generate Natural Language |
|---|---|
| Abbreviated title | LGNL |
| Country/Territory | Australia |
| City | Sydney |
| Period | 10/08/17 → 10/08/17 |
| Internet address | https://sites.google.com/site/langgen17/accepted-papers |
Referenceless Quality Estimation for Natural Language Generation