A Knowledge-Grounded Multimodal Search-Based Conversational Agent

Shubham Agarwal, Ondrej Dusek, Ioannis Konstas, Verena Rieser

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

17 Citations (Scopus)

Abstract

Multimodal search-based dialogue is a challenging new task: It extends visually grounded question answering systems into multi-turn conversations with access to an external database. We address this new challenge by learning a neural response generation system from the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017). We introduce a knowledge-grounded multimodal conversational model where an encoded knowledge base (KB) representation is appended to the decoder input. Our model substantially outperforms strong baselines in terms of text-based similarity measures (over 9 BLEU points, 3 of which are solely due to the use of additional information from the KB).
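The grounding mechanism the abstract describes, appending an encoded KB representation to the decoder input, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the module name KBGroundedDecoder, the GRU decoder, the variable kb_encoding, and all dimensions are assumptions chosen only to show the idea of concatenating a fixed KB vector to each decoder input embedding.

# Illustrative sketch (PyTorch): a decoder whose per-step input is the token
# embedding concatenated with an encoded knowledge-base representation.
# All names and dimensions are hypothetical, not from the paper.
import torch
import torch.nn as nn

class KBGroundedDecoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, kb_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # The GRU consumes the token embedding concatenated with the KB vector.
        self.gru = nn.GRU(embed_dim + kb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, kb_encoding, hidden=None):
        # tokens: (batch, seq_len); kb_encoding: (batch, kb_dim)
        emb = self.embedding(tokens)                            # (batch, seq_len, embed_dim)
        kb = kb_encoding.unsqueeze(1).expand(-1, emb.size(1), -1)
        rnn_in = torch.cat([emb, kb], dim=-1)                   # append KB to each decoder input
        output, hidden = self.gru(rnn_in, hidden)
        return self.out(output), hidden

# Toy usage: batch of 2, sequence length 5, randomly "encoded" KB.
decoder = KBGroundedDecoder(vocab_size=1000, embed_dim=64, kb_dim=32, hidden_dim=128)
logits, _ = decoder(torch.randint(0, 1000, (2, 5)), torch.randn(2, 32))
print(logits.shape)  # torch.Size([2, 5, 1000])

In a real system the kb_encoding would come from encoding the retrieved database entries rather than random noise; the sketch only shows where that vector enters the decoder.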
Original language: English
Title of host publication: Proceedings of the 2018 EMNLP Workshop SCAI
Subtitle of host publication: The 2nd International Workshop on Search-Oriented Conversational AI
Publisher: Association for Computational Linguistics
Pages: 59-66
Number of pages: 8
ISBN (Electronic): 9781948087759
Publication status: Published - 31 Oct 2018
Event: 2nd International Workshop on Search-Oriented Conversational AI - Brussels, Belgium
Duration: 31 Oct 2018 - 31 Oct 2018

Workshop

Workshop: 2nd International Workshop on Search-Oriented Conversational AI
Country/Territory: Belgium
City: Brussels
Period: 31/10/18 - 31/10/18
