Adversarial Textual Robustness of Visual Dialog

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)
141 Downloads (Pure)

Abstract

Adversarial robustness evaluates the worst-case performance of a machine learning model to ensure its safety and reliability. For example, a minimal change to the user input, e.g. substituting a synonym, can cause a previously correct model to return a wrong answer. Using this scenario, this study is the first to investigate the robustness of visually grounded dialog models against textual attacks. We first aim to understand how multimodal input components contribute to model robustness. Our results show that models which encode dialog history are more robust, as the history provides redundant information. This contrasts with prior work, which finds that dialog history is negligible for model performance on this task. We also evaluate how to generate adversarial test examples which successfully fool the model but remain undetected by the user/software designer. Our analysis shows that both the textual and the visual context are important for generating plausible attacks.
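To make the attack scenario in the abstract concrete, below is a minimal sketch of a single-word synonym-substitution attack. It is not the paper's actual pipeline: WordNet as the synonym source and the `model_answers_correctly` oracle are assumptions introduced purely for illustration; a real attack would query the victim visual dialog model with the image, dialog history, and perturbed question.

```python
# Minimal sketch of a synonym-substitution attack on a question string.
# Assumptions (not from the paper): WordNet as the synonym source and a
# hypothetical model_answers_correctly() oracle standing in for the victim model.
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")


def model_answers_correctly(question: str) -> bool:
    """Hypothetical stand-in for querying the visual dialog model."""
    return "couch" not in question  # placeholder behaviour for illustration only


def synonym_attack(question: str) -> str | None:
    """Return a one-word synonym swap that flips the model's answer, if any."""
    tokens = question.split()
    for i, tok in enumerate(tokens):
        for syn in wordnet.synsets(tok):
            for lemma in syn.lemmas():
                candidate = lemma.name().replace("_", " ")
                if candidate.lower() == tok.lower():
                    continue  # skip the original word itself
                perturbed = " ".join(tokens[:i] + [candidate] + tokens[i + 1:])
                if not model_answers_correctly(perturbed):
                    return perturbed  # minimal change that fools the model
    return None  # no successful single-word attack found


print(synonym_attack("is the cat on the sofa"))
```

A perturbation like this is "plausible" only if the swapped word still fits the textual and visual context, which is the constraint the paper's analysis highlights.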

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: ACL 2023
Publisher: Association for Computational Linguistics
Pages: 3422-3438
Number of pages: 17
ISBN (Electronic): 9781959429623
DOIs
Publication status: Published - Jul 2023
Event: 61st Annual Meeting of the Association for Computational Linguistics 2023 - Toronto, Canada
Duration: 9 Jul 2023 - 14 Jul 2023

Conference

Conference: 61st Annual Meeting of the Association for Computational Linguistics 2023
Abbreviated title: ACL 2023
Country/Territory: Canada
City: Toronto
Period: 9/07/23 - 14/07/23

ASJC Scopus subject areas

  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
