Bootstrapping incremental dialogue systems: using linguistic knowledge to learn from minimal data

Dimitris Kalatzis, Arash Eshghi, Oliver Lemon

Research output: Contribution to conference › Paper › peer-review

Abstract

We present a method for inducing new dialogue systems from very small amounts of unannotated dialogue data, showing how word-level exploration using Reinforcement Learning (RL), combined with an incremental, semantic grammar (Dynamic Syntax, DS), allows systems to discover, generate, and understand many new dialogue variants. The method avoids the use of expensive and time-consuming dialogue act annotations, and supports more natural (incremental) dialogues than turn-based systems. Here, language generation and dialogue management are treated as a joint decision/optimisation problem, and the MDP model for RL is constructed automatically. With an implemented system, we show that this method enables a wide range of dialogue variations to be automatically captured, even when the system is trained from only a single dialogue. The variants include question-answer pairs, over- and under-answering, self- and other-corrections, clarification interaction, split utterances, and ellipsis. This generalisation property results from the structural knowledge and constraints present within the DS grammar, and highlights some limitations of recent systems built using machine learning techniques alone.
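The core idea in the abstract, word-level RL exploration constrained by a grammar, can be illustrated with a minimal sketch. The Python code below is not the authors' implementation: the Dynamic Syntax parser is replaced by a toy lexicon, and all names (licensed_words, q_learn, GOAL_STATES) are hypothetical. It shows tabular Q-learning in which the action space at each step is just the set of words the grammar licenses from the current partial utterance, and reward is given only when a goal word sequence (standing in for a goal semantics) is reached. This is a toy analogue of the automatically constructed MDP the abstract mentions.

```python
# Hypothetical sketch: word-level RL over grammar-constrained states.
# None of these names come from the paper's actual implementation.
import random
from collections import defaultdict

# Toy stand-in for the DS grammar: from a partial utterance (the "state"),
# return the words the grammar licenses next. A real system would query
# an incremental Dynamic Syntax parser here.
LEXICON = {
    (): ["what", "i"],
    ("what",): ["would"],
    ("what", "would"): ["you"],
    ("what", "would", "you"): ["like"],
    ("i",): ["want"],
    ("i", "want"): ["coffee", "tea"],
}
# States whose (toy) semantics match the dialogue goal.
GOAL_STATES = {("what", "would", "you", "like"), ("i", "want", "coffee")}

def licensed_words(state):
    """Words the grammar allows as a continuation of this partial parse."""
    return LEXICON.get(state, [])

def q_learn(episodes=2000, alpha=0.5, gamma=0.95, epsilon=0.2):
    """Tabular Q-learning where each action is a single word and each
    state is the word sequence parsed so far."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state = ()
        while True:
            words = licensed_words(state)
            if not words:  # no grammatical continuation: dead end
                break
            # Epsilon-greedy choice among grammatical words only.
            if random.random() < epsilon:
                word = random.choice(words)
            else:
                word = max(words, key=lambda w: Q[(state, w)])
            next_state = state + (word,)
            reward = 1.0 if next_state in GOAL_STATES else 0.0
            future = max((Q[(next_state, w)]
                          for w in licensed_words(next_state)), default=0.0)
            Q[(state, word)] += alpha * (reward + gamma * future - Q[(state, word)])
            state = next_state
            if reward:  # goal semantics reached: end the episode
                break
    return Q
```

After training, a greedy walk from the empty state (repeatedly picking the word with the highest Q-value via max(licensed_words(s), key=lambda w: Q[(s, w)])) generates a goal utterance word by word. Because exploration is restricted to grammatical continuations, the agent never needs to learn to reject ungrammatical strings; this is a toy version of the generalisation the abstract attributes to the structural constraints of the DS grammar.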
Original language: English
Publication status: Published - 2016
Event: 30th Conference on Neural Information Processing Systems 2016 - Barcelona, Spain
Duration: 5 Dec 2016 - 10 Dec 2016

Conference

Conference: 30th Conference on Neural Information Processing Systems 2016
Abbreviated title: NIPS 2016
Country/Territory: Spain
City: Barcelona
Period: 5/12/16 - 10/12/16
