Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers

Georgios Pantazopoulos, Alessandro Suglia, Oliver Lemon, Arash Eshghi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

An effective method for combining frozen large language models (LLMs) and visual encoders involves a resampler module that creates a ‘visual prompt’, which is provided to the LLM alongside the textual prompt. While this approach has enabled impressive performance across many coarse-grained tasks such as image captioning and visual question answering (Alayrac et al., 2022; Dai et al., 2023), more fine-grained tasks that require spatial understanding have not been thoroughly examined. In this paper, we use diagnostic classifiers to measure the extent to which the visual prompt produced by the resampler encodes spatial information. Our results show that this information is largely absent from the resampler output when the resampler is kept frozen during training of the classifiers. However, when the resampler and classifier are trained jointly, we observe a significant performance boost. This shows that the compression achieved by the resamplers can in principle encode the requisite spatial information, but that more object-aware objectives are needed at the pretraining stage to facilitate this capability.
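The probing setup described in the abstract can be pictured with a minimal PyTorch sketch: a lightweight diagnostic classifier reads the visual-prompt tokens emitted by a resampler and predicts a spatial label (e.g., an object's region in the image), with the resampler either frozen or trained jointly with the probe. All names here (`SpatialProbe`, `build_probe_optimizer`), the mean-pooling step, and the hyperparameters are illustrative assumptions for exposition, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SpatialProbe(nn.Module):
    """Hypothetical linear diagnostic classifier over resampler tokens."""

    def __init__(self, token_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(token_dim, num_classes)

    def forward(self, visual_prompt: torch.Tensor) -> torch.Tensor:
        # visual_prompt: (batch, num_tokens, token_dim) -- the fixed set of
        # learned-query outputs a Perceiver-style resampler produces.
        pooled = visual_prompt.mean(dim=1)  # pool over prompt tokens (assumed choice)
        return self.classifier(pooled)

def build_probe_optimizer(resampler: nn.Module, probe: SpatialProbe,
                          joint_training: bool) -> torch.optim.Optimizer:
    """Mirror the two probing regimes: frozen resampler vs. joint training."""
    for p in resampler.parameters():
        p.requires_grad = joint_training
    params = list(probe.parameters())
    if joint_training:
        params += list(resampler.parameters())
    return torch.optim.AdamW(params, lr=1e-4)  # illustrative learning rate
```

Under this sketch, the frozen regime trains only the probe on top of fixed visual prompts, so probe accuracy reflects how much spatial information the pretrained resampler already encodes; the joint regime additionally updates the resampler, testing whether its compressed representation can encode that information when the objective demands it.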
Original language: English
Title of host publication: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Publisher: Association for Computational Linguistics
Pages: 540-549
Number of pages: 10
Volume: 2
ISBN (Electronic): 9798891761155
Publication status: Published - Jun 2024
