Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge

Ondřej Dušek, Jekaterina Novikova, Verena Rieser

Research output: Contribution to journal › Article

Abstract

This paper provides a comprehensive analysis of the first shared task on End-to-End Natural Language Generation (NLG) and identifies avenues for future research based on the results. This shared task aimed to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena. Introducing novel automatic and human metrics, we compare 62 systems submitted by 17 institutions, covering a wide range of approaches, including machine learning architectures – with the majority implementing sequence-to-sequence models (seq2seq) – as well as systems based on grammatical rules and templates. Seq2seq-based systems have demonstrated a great potential for NLG in the challenge. We find that seq2seq systems generally score high in terms of word-overlap metrics and human evaluations of naturalness – with the winning SLUG system (Juraska et al., 2018) being seq2seq-based. However, vanilla seq2seq models often fail to correctly express a given meaning representation if they lack a strong semantic control mechanism applied during decoding. Moreover, seq2seq models can be outperformed by hand-engineered systems in terms of overall quality, as well as complexity, length and diversity of outputs. This research has influenced, inspired and motivated a number of recent studies outwith the original competition, which we also summarise as part of this paper.
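
To make the abstract's point about semantic control concrete, the following is a minimal illustrative sketch of a slot-coverage check over an E2E-style meaning representation (a list of attribute-value pairs, as in the E2E dataset). It is not code from the paper or from any submitted system; the helper functions parse_mr and missing_slots and the verbatim string-matching heuristic are assumptions made purely for illustration.

# Illustrative sketch only (Python): naive slot-coverage reranking of candidate
# outputs for an E2E-style MR. Not the mechanism used by any challenge system.

def parse_mr(mr: str) -> dict:
    """Parse an E2E-style MR such as 'name[The Eagle], eatType[coffee shop]'."""
    slots = {}
    for part in mr.split("],"):
        attr, _, value = part.strip().rstrip("]").partition("[")
        slots[attr.strip()] = value.strip()
    return slots

def missing_slots(mr: str, candidate: str) -> list:
    """Return slot values from the MR that do not appear verbatim in the candidate."""
    text = candidate.lower()
    return [v for v in parse_mr(mr).values() if v.lower() not in text]

mr = "name[The Eagle], eatType[coffee shop], food[French], area[riverside]"
candidates = [
    "The Eagle is a coffee shop by the riverside.",              # omits 'French'
    "The Eagle is a French coffee shop in the riverside area.",  # covers all slots
]
# Prefer the candidate that realises the most slot values.
best = min(candidates, key=lambda c: len(missing_slots(mr, c)))
print(best)

Reranking beam-search candidates by such a coverage score is one simple way to penalise omitted slot values; the semantic control mechanisms actually used by challenge systems (for instance, the slot-alignment reranking described by Juraska et al., 2018 for SLUG) are more sophisticated and differ per system.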

Language: English
Pages: 123-156
Number of pages: 34
Journal: Computer Speech and Language
Volume: 59
Early online date: 3 Jul 2019
DOI: 10.1016/j.csl.2019.06.009
Publication status: E-pub ahead of print - 3 Jul 2019

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Human-Computer Interaction

Cite this

@article{fe8de9ee4c734ed688d526d8e0b99eab,
title = "Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge",
author = "Ondřej Dušek and Jekaterina Novikova and Verena Rieser",
year = "2019",
month = "7",
day = "3",
doi = "10.1016/j.csl.2019.06.009",
language = "English",
volume = "59",
pages = "123--156",
journal = "Computer Speech and Language",
issn = "0885-2308",
publisher = "Academic Press Inc.",

}

Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge. / Dušek, Ondřej; Novikova, Jekaterina; Rieser, Verena.

In: Computer Speech and Language, Vol. 59, 01.2020, p. 123-156.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge

AU - Dušek, Ondřej

AU - Novikova, Jekaterina

AU - Rieser, Verena

PY - 2019/7/3

Y1 - 2019/7/3

UR - http://www.scopus.com/inward/record.url?scp=85070102543&partnerID=8YFLogxK

U2 - 10.1016/j.csl.2019.06.009

DO - 10.1016/j.csl.2019.06.009

M3 - Article

VL - 59

SP - 123

EP - 156

JO - Computer Speech and Language

T2 - Computer Speech and Language

JF - Computer Speech and Language

SN - 0885-2308

ER -