Multitask Multimodal Prompted Training for Interactive Embodied Task Completion

Georgios Pantazopoulos, Malvina Nikandrou, Amit Parekh, Bhathiya Hemanthage, Arash Eshghi, Ioannis Konstas, Verena Rieser, Oliver Lemon, Alessandro Suglia

Research output: Contribution to conference › Paper › peer-review


Abstract

Interactive and embodied tasks pose at least two fundamental challenges to existing Vision Language (VL) models: 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories, and casts action prediction as multimodal text generation. By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks. Unlike previous modular approaches with independently trained components, we use a single multitask model where each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state-of-the-art success rate of 36.81% on Dialog-guided Task Completion (DTC), a benchmark for evaluating dialog-guided agents in the Alexa Arena.
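
The abstract's central idea of casting action prediction as multimodal text generation can be illustrated with a minimal sketch. The helper names below (format_prompt, parse_action, Observation) are hypothetical and not taken from the paper; they only show how instructions, trajectories, and observations might be serialized into a single text interface that an encoder-decoder model could consume and emit.

```python
# Illustrative sketch only: these names and the action string format are assumptions,
# not EMMA's actual implementation.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Observation:
    frame_id: int
    object_labels: List[str]  # e.g. objects detected in the current frame


def format_prompt(instruction: str, history: List[str], obs: Observation) -> str:
    """Serialize the dialog instruction, prior actions, and current observation
    into one text prompt, so every task shares the same text-generation interface."""
    history_str = " ".join(history) if history else "<none>"
    objects_str = ", ".join(obs.object_labels)
    return (
        f"Instruction: {instruction} "
        f"History: {history_str} "
        f"Frame {obs.frame_id} objects: {objects_str} "
        f"Predict next action:"
    )


def parse_action(generated_text: str) -> dict:
    """Decode a generated action string such as 'goto sink' back into a
    structured action the environment can execute."""
    parts = generated_text.strip().split(maxsplit=1)
    argument: Optional[str] = parts[1] if len(parts) > 1 else None
    return {"action": parts[0], "argument": argument}


# Usage: the same prompt/decode loop serves captioning, grounding, and action
# prediction alike, which is what enables multitask transfer.
obs = Observation(frame_id=3, object_labels=["mug", "sink", "counter"])
prompt = format_prompt("Please rinse the mug.", ["goto counter", "pickup mug"], obs)
print(prompt)
print(parse_action("goto sink"))
```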
Original language: English
Pages: 768-789
Number of pages: 22
DOIs
Publication status: Published - Dec 2023
Event: Conference on Empirical Methods in Natural Language Processing 2023 - Singapore
Duration: 6 Dec 2023 - 10 Dec 2023
https://2023.emnlp.org/

Conference

Conference: Conference on Empirical Methods in Natural Language Processing 2023
Abbreviated title: EMNLP 2023
Country/Territory: Singapore
Period: 6/12/23 - 10/12/23
Internet address: https://2023.emnlp.org/
