Hierarchical reinforcement learning for situated natural language generation

Nina Dethlefs, Heriberto Cuayáhuitl

Research output: Contribution to journal › Article

12 Citations (Scopus)

Abstract

Natural Language Generation systems in interactive settings often face a multitude of choices, given that the communicative effect of each utterance they generate depends crucially on the interplay between its physical circumstances, addressee and interaction history. This is particularly true in interactive and situated settings. In this paper we present a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a context by optimisation through trial and error. The model is trained from human-human corpus data and learns, in particular, to balance the trade-off between efficiency and detail in giving instructions: the user needs to be given sufficient information to execute their task, but without exceeding their cognitive load. We present results from simulation and a task-based human evaluation study comparing two different versions of hierarchical reinforcement learning: one operates using a hierarchy of policies with a large state space and local knowledge, and the other additionally shares knowledge across generation subtasks to enhance performance. Results show that sharing knowledge across subtasks achieves better performance than learning in isolation, leading to smoother and more successful interactions that are better perceived by human users.
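The efficiency-versus-detail trade-off described above can be illustrated with a minimal sketch: a tabular policy learns by trial and error which level of instruction detail suits a given user context. All state and action names, reward values, and the one-step update are illustrative assumptions; the paper's actual model uses a hierarchy of policies over a far richer state space.

```python
import random

random.seed(0)

# Hypothetical states and actions for one generation subtask; the paper's
# state space (interaction history, spatial context, etc.) is much larger.
STATES = ["user_familiar", "user_unfamiliar"]
ACTIONS = ["low_detail", "high_detail"]
ALPHA, EPSILON = 0.3, 0.2  # learning rate, exploration rate

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Assumed reward encoding the trade-off: give enough information to
    # execute the task, but do not overload a user who knows the domain.
    if state == "user_unfamiliar":
        return 1.0 if action == "high_detail" else -1.0
    return 1.0 if action == "low_detail" else -1.0

def choose(state, greedy=False):
    # Epsilon-greedy action selection during learning; greedy at test time.
    if not greedy and random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

# Trial-and-error optimisation: single-step episodes with a bandit-style
# Q-value update toward the observed reward.
for _ in range(2000):
    s = random.choice(STATES)
    a = choose(s)
    q[(s, a)] += ALPHA * (reward(s, a) - q[(s, a)])

print(choose("user_unfamiliar", greedy=True))  # high_detail
print(choose("user_familiar", greedy=True))    # low_detail
```

In the paper's hierarchical setting, many such subtask policies are arranged in a hierarchy, and the reported gains come from sharing learned knowledge across them rather than training each in isolation.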

Original language: English
Pages (from-to): 391-435
Number of pages: 45
Journal: Natural Language Engineering
Volume: 21
Issue number: 3
DOIs
Publication status: Published - May 2015

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Language and Linguistics
  • Linguistics and Language

