Abstract
We investigate an end-to-end method for automatically inducing task-based dialogue systems from small amounts of unannotated dialogue data. The method combines an incremental, semantic grammar formalism, Dynamic Syntax and Type Theory with Records (DS-TTR), with Reinforcement Learning (RL), where language generation and dialogue management are treated as one and the same decision problem. The systems thus produced are incremental: dialogues are processed word-by-word, a property shown in prior work to be essential for supporting more natural, spontaneous dialogue. We hypothesised that, given the rich linguistic knowledge encoded in the grammar, our model should enable a combinatorially large number of interactional variations to be processed, even when the system is trained on only a few dialogues. Our experiments show that our model can process 70% of the Facebook AI bAbI dataset, a set of unannotated dialogues in a ‘restaurant-search’ domain, even when trained on only 0.13% of the dataset (5 dialogues). This remarkable generalisation property results from the structural knowledge and constraints encoded within the grammar, and it highlights limitations of recent state-of-the-art systems built using machine-learning techniques alone.
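To make the word-level decision problem concrete, here is a minimal, self-contained Python sketch, not the paper's implementation: a toy `licensed_next_words` function stands in for the DS-TTR parser's word-by-word grammaticality constraints, and tabular Q-learning stands in for the paper's RL setup. All identifiers and example utterances are hypothetical.

```python
import random
from collections import defaultdict

# Toy stand-in for the DS-TTR parser: a handful of hypothetical system
# utterances define which words are grammatical after a given prefix.
# In the real system, the Dynamic Syntax grammar licenses next words.
EXAMPLE_UTTERANCES = [
    "what kind of food would you like",
    "which price range do you want",
    "ok let me look into some options",
]

def licensed_next_words(prefix):
    """Words the toy 'grammar' allows immediately after `prefix`."""
    options = set()
    for utterance in EXAMPLE_UTTERANCES:
        tokens = utterance.split()
        if tokens[:len(prefix)] == prefix and len(tokens) > len(prefix):
            options.add(tokens[len(prefix)])
    return sorted(options)

# Tabular Q-values over (prefix, next-word) pairs: generation and
# dialogue management collapse into a single word-level decision problem.
Q = defaultdict(float)
ALPHA, GAMMA, EPSILON = 0.5, 0.95, 0.2

def choose_word(prefix):
    options = licensed_next_words(prefix)
    if not options:
        return None  # no grammatical continuation: utterance is complete
    if random.random() < EPSILON:
        return random.choice(options)  # explore
    return max(options, key=lambda w: Q[(tuple(prefix), w)])  # exploit

def run_episode(goal_utterance):
    """Generate word-by-word; reward 1.0 iff the goal utterance emerges."""
    prefix, trajectory = [], []
    while True:
        word = choose_word(prefix)
        if word is None:
            break
        trajectory.append((tuple(prefix), word))
        prefix.append(word)
    reward = 1.0 if " ".join(prefix) == goal_utterance else 0.0
    # Monte-Carlo-style backup along the word-by-word trajectory.
    target = reward
    for state, action in reversed(trajectory):
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        target = GAMMA * Q[(state, action)]
    return " ".join(prefix)

for _ in range(500):
    run_episode("what kind of food would you like")
EPSILON = 0.0  # greedy roll-out after training
print(run_episode("what kind of food would you like"))
```

The sketch omits everything that makes the paper's result interesting, in particular the DS-TTR parser itself and the dialogue context, but it illustrates how grammar constraints shrink the action space the learner must search at each word.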
Original language | English |
---|---|
Title of host publication | Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML 2017) |
Editors | Simon Dobnik, Shalom Lappin |
Pages | 79-84 |
Publication status | Published - Jun 2017 |
Event | Conference on Logic and Machine Learning in Natural Language 2017, Gothenburg, Sweden. Duration: 12 Jun 2017 → 13 Jun 2017 |
Conference
Conference | Conference on Logic and Machine Learning in Natural Language 2017 |
---|---|
Abbreviated title | LaML 2017 |
Country/Territory | Sweden |
City | Gothenburg |
Period | 12/06/17 → 13/06/17 |
Internet address | https://clasp.gu.se/news-events/conference-on-logic-and-machine-learning-in-natural-language--laml-/