Abstract
Learning with minimal data is one of the key challenges in the development of practical, production-ready goal-oriented dialogue systems. In a real-world enterprise setting where dialogue systems are developed rapidly and are expected to work robustly for an ever-growing variety of domains, products, and scenarios, efficient learning from a limited number of examples becomes indispensable. In this paper, we introduce a technique to achieve state-of-the-art dialogue generation performance in a few-shot setup, without using any annotated data. We do this by leveraging background knowledge from a larger, more highly represented dialogue source: the MetaLWOz dataset. We evaluate our model on the Stanford Multi-Domain Dialogue Dataset, consisting of human-human goal-oriented dialogues in the in-car navigation, appointment scheduling, and weather information domains. We show that our few-shot approach achieves state-of-the-art results on that dataset by consistently outperforming the previous best model in terms of BLEU and Entity F1 scores, while being more data-efficient by not requiring any data annotation.
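The abstract reports results in terms of BLEU and Entity F1, the two standard metrics for this dataset. As a rough illustration only (this is not the authors' evaluation code, and the function names `sentence_bleu` and `entity_f1` are hypothetical helpers), a minimal sketch of how these metrics are typically computed might look like:

```python
from collections import Counter
import math


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def sentence_bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions with a brevity penalty (lightly smoothed)."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # clipped overlap: each candidate n-gram counts at most as
        # often as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_prec += math.log(max(overlap, 1e-9) / total) / max_n
    # brevity penalty discourages overly short generations
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(log_prec)


def entity_f1(pred_entities, gold_entities):
    """F1 over the sets of knowledge-base entities mentioned in the
    generated vs. the gold response."""
    pred, gold = set(pred_entities), set(gold_entities)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    precision, recall = tp / len(pred), tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

In practice, corpus-level BLEU (e.g. as implemented in sacreBLEU or NLTK) aggregates n-gram statistics over all responses before combining them, rather than averaging per-sentence scores as sketched here.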
Original language | English |
---|---|
Title of host publication | Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue |
Publisher | Association for Computational Linguistics |
Pages | 32-39 |
Number of pages | 8 |
ISBN (Electronic) | 9781950737611 |
DOIs | |
Publication status | Published - Sept 2019 |
Event | 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue 2019, Stockholm, Sweden. Duration: 11 Sept 2019 → 13 Sept 2019 |
Conference
Conference | 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue 2019 |
---|---|
Abbreviated title | SIGDIAL 2019 |
Country/Territory | Sweden |
City | Stockholm |
Period | 11/09/19 → 13/09/19 |
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design
- Computer Vision and Pattern Recognition
- Human-Computer Interaction
- Modelling and Simulation