Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation

Malvina Nikandrou, Georgios Pantazopoulos, Ioannis Konstas, Alessandro Suglia

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Continual learning focuses on incrementally training a model on a sequence of tasks with the aim of learning new tasks while minimizing performance drop on previous tasks. Existing approaches at the intersection of Continual Learning and Visual Question Answering (VQA) do not study how the multimodal nature of the input affects the learning dynamics of a model. In this paper, we demonstrate that each modality evolves at different rates across a continuum of tasks and that this behavior occurs in established encoder-only models as well as modern recipes for developing Vision & Language (VL) models. Motivated by this observation, we propose a modality-aware feature distillation (MAFED) approach which outperforms existing baselines across models of varying scale in three multimodal continual learning settings. Furthermore, we provide ablations showcasing that modality-aware distillation complements experience replay. Overall, our results emphasize the importance of addressing modality-specific dynamics to prevent forgetting in multimodal continual learning.
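To make the idea of modality-aware feature distillation concrete, the sketch below shows one way a per-modality distillation loss could be combined with a standard VQA objective in a PyTorch-style setup. This is a minimal illustration, not the paper's exact formulation: the function name mafed_style_loss, the use of an MSE feature-matching term, and the weights alpha_vision and alpha_text are all hypothetical, assuming only that vision and language features are distilled from a frozen copy of the previous model with separate strengths.

```python
import torch
import torch.nn.functional as F

def mafed_style_loss(
    logits: torch.Tensor,            # [batch, num_answers] current-task predictions
    labels: torch.Tensor,            # [batch] ground-truth answer indices
    vision_feats: torch.Tensor,      # [batch, d_v] vision features from the current model
    text_feats: torch.Tensor,        # [batch, d_t] text features from the current model
    old_vision_feats: torch.Tensor,  # features from a frozen copy of the previous-task model
    old_text_feats: torch.Tensor,
    alpha_vision: float = 1.0,       # hypothetical per-modality distillation weights
    alpha_text: float = 0.5,
) -> torch.Tensor:
    """VQA task loss plus separate feature-distillation terms per modality."""
    # Standard classification loss on the current task's answers.
    task_loss = F.cross_entropy(logits, labels)

    # Distill each modality towards the previous model's features with its own weight,
    # reflecting that the two modalities drift at different rates across tasks.
    vision_distill = F.mse_loss(vision_feats, old_vision_feats.detach())
    text_distill = F.mse_loss(text_feats, old_text_feats.detach())

    return task_loss + alpha_vision * vision_distill + alpha_text * text_distill
```

In practice, old_vision_feats and old_text_feats would come from a model snapshot saved before training on the new task, and, as the abstract notes, such a distillation term can be used alongside an experience-replay buffer.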

Original language: English
Title of host publication: Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)
Publisher: Association for Computational Linguistics
Pages: 73-85
Number of pages: 13
ISBN (Electronic): 9798891761537
DOIs
Publication status: Published - Aug 2024
Event: 3rd Workshop on Advances in Language and Vision Research 2024 - Bangkok, Thailand
Duration: 16 Aug 2024 → …

Conference

Conference: 3rd Workshop on Advances in Language and Vision Research 2024
Abbreviated title: ALVR 2024
Country/Territory: Thailand
City: Bangkok
Period: 16/08/24 → …

ASJC Scopus subject areas

  • Language and Linguistics
  • Computer Science Applications
  • Software
  • Ophthalmology
  • Linguistics and Language
