SimpleMTOD: A Simple Language Model for Multimodal Task-Oriented Dialogue with Symbolic Scene Representation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

SimpleMTOD is a simple language model that recasts several sub-tasks in multimodal task-oriented dialogues as sequence prediction tasks. SimpleMTOD is built on a large-scale transformer-based auto-regressive architecture, which has already proven successful in uni-modal task-oriented dialogues, and effectively leverages transfer learning from pretrained GPT-2. In order to capture the semantics of visual scenes, we introduce both local and de-localized tokens for objects within a scene. De-localized tokens represent the type of an object rather than the specific object itself and so possess a consistent meaning across the dataset. SimpleMTOD achieves a state-of-the-art BLEU score (0.327) in the Response Generation sub-task of the SIMMC 2.0 test-std dataset while performing on par in the other multimodal sub-tasks: Disambiguation, Coreference Resolution, and Dialog State Tracking. This is despite taking a minimalist approach to extracting visual (and non-visual) information. In addition, the model does not rely on task-specific architectural changes such as classification heads.
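The sketch below illustrates the de-localized token idea from the abstract: objects are named by their type plus a per-type counter, so a token carries the same meaning in every scene, unlike a scene-local object index. This is a minimal illustration, not the authors' code; the dict-based scene schema and the angle-bracket token format are assumptions for the example, and the exact SIMMC 2.0 vocabulary differs.

```python
# Minimal sketch (assumed schema, not the paper's implementation) of
# "de-localized" object tokens: each object is denoted by its type and a
# per-type counter rather than a scene-specific index.
from collections import defaultdict

def delocalize(scene_objects):
    """Map raw scene objects to de-localized type tokens.

    `scene_objects` is assumed to be a list of dicts with a "type" field,
    e.g. {"type": "jacket"}; the real SIMMC 2.0 schema is richer.
    """
    counts = defaultdict(int)
    tokens = []
    for obj in scene_objects:
        obj_type = obj["type"]
        # "<jacket_0>" means the same thing across all scenes, whereas a
        # local token like "<OBJ_17>" is only meaningful within one scene.
        tokens.append(f"<{obj_type}_{counts[obj_type]}>")
        counts[obj_type] += 1
    return tokens

# Example: two jackets and a hat yield type-grounded tokens.
print(delocalize([{"type": "jacket"}, {"type": "hat"}, {"type": "jacket"}]))
# -> ['<jacket_0>', '<hat_0>', '<jacket_1>']
```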
Original language: English
Title of host publication: Proceedings of the 15th International Conference on Computational Semantics
Place of Publication: Nancy, France
Publisher: Association for Computational Linguistics
Pages: 293–304
Number of pages: 12
ISBN (Electronic): 9781959429746
Publication status: Published - Jun 2023

Keywords

  • cs.CL
  • cs.LG
