Abstract
Visual Dialog involves “understanding” the dialog history (what has been discussed previously) and the current question (what is asked), in addition to grounding information in the image, to generate the correct response. In this paper, we show that co-attention models which explicitly encode dialog history outperform models that do not, achieving state-of-the-art performance (72% NDCG on the val set). However, we also expose shortcomings of the crowd-sourced dataset collection procedure by showing that history is in fact only required for a small fraction of the data, and that the current evaluation metric encourages generic replies. To demonstrate this, we propose a challenging subset (VisDialConv) of the VisDial val set and provide a benchmark of 63% NDCG.
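Since the abstract's claims hinge on NDCG scores, a minimal sketch of how NDCG is typically computed over ranked answer candidates may help; the rank cutoff follows the VisDial convention of scoring only the top K candidates, where K is the number of candidates judged relevant. The function, variable names, and toy scores below are illustrative assumptions, not the paper's code or the official VisDial evaluation script.

```python
import numpy as np

def ndcg(relevance, ranking):
    """NDCG over ranked answer candidates, VisDial-style.

    relevance: dense relevance score per candidate (length N).
    ranking: candidate indices sorted by model score, best first.
    Illustrative sketch only -- not the official evaluation code.
    """
    relevance = np.asarray(relevance, dtype=float)
    k = int((relevance > 0).sum())  # cutoff: number of relevant candidates
    discounts = 1.0 / np.log2(np.arange(2, k + 2))  # 1 / log2(rank + 1)
    dcg = (relevance[ranking[:k]] * discounts).sum()
    ideal = (np.sort(relevance)[::-1][:k] * discounts).sum()
    return dcg / ideal if ideal > 0 else 0.0

# Toy example: 4 candidates with hypothetical dense annotations; the
# model ranks an irrelevant (e.g. generic) answer first and is penalized.
rel = [0.0, 1.0, 0.5, 0.0]
print(ndcg(rel, [3, 1, 2, 0]))  # ~0.48: only top-k ranks earn credit
```

Because partial credit is spread across all annotated-relevant candidates, a model can score well by ranking safe, generic answers highly; this is the metric shortcoming the abstract refers to.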
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics |
| Editors | Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault |
| Publisher | Association for Computational Linguistics |
| Pages | 8182-8197 |
| Number of pages | 16 |
| ISBN (Electronic) | 9781952148255 |
| Publication status | Published - Jul 2020 |
| Event | 58th Annual Meeting of the Association for Computational Linguistics 2020, Virtual/Online, United States. Duration: 5 Jul 2020 → 10 Jul 2020 |
Conference
| Conference | 58th Annual Meeting of the Association for Computational Linguistics 2020 |
|---|---|
| Abbreviated title | ACL 2020 |
| Country/Territory | United States |
| City | Virtual, Online |
| Period | 5/07/20 → 10/07/20 |
ASJC Scopus subject areas
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics