Comparing attribute classifiers for interactive language grounding

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)


We address the problem of interactively learning perceptually grounded word meanings in a multimodal dialogue system. We design a semantic processing system and a visual processing system to support this, and illustrate how the two can be integrated. We then compare the performance (Precision, Recall, F1, AUC) of three state-of-the-art attribute classifiers (MLKNN, DAP, and SVMs) for the purpose of interactive language grounding, on the aPascal-aYahoo datasets. Prior work reported results for object classification using these methods for attribute labelling, whereas we focus on their performance for attribute labelling itself. We find that while these methods can perform well for some attributes (e.g. 'head', 'ears', 'furry'), none of the models performs well over the whole attribute set, and none supports incremental learning. This leads us to suggest directions for future work.
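The comparison above scores each attribute classifier per attribute using Precision, Recall, F1, and AUC. As a minimal sketch of how such per-attribute metrics are computed (the data and threshold below are illustrative, not from the paper), each attribute is treated as a binary label per image and the classifier's confidence scores are evaluated against the ground truth:

```python
def precision_recall_f1(y_true, y_pred):
    """Binary precision, recall, and F1 for one attribute."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


def auc(y_true, scores):
    """Rank-based AUC: probability a positive example outranks a negative one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    if not pos or not neg:
        return 0.0
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Toy example: presence of a hypothetical attribute "furry" on six images,
# with made-up classifier confidence scores thresholded at 0.5.
y_true = [1, 0, 1, 1, 0, 0]
scores = [0.9, 0.2, 0.7, 0.4, 0.6, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]

p, r, f = precision_recall_f1(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f, 2), round(auc(y_true, scores), 2))
# → 0.67 0.67 0.67 0.89
```

In a full evaluation these per-attribute scores would be computed for every attribute in the set and every classifier under comparison; the paper's finding is that no single classifier scores well across all attributes.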
Original language: English
Title of host publication: Proceedings of the 4th Workshop on Vision and Language (VL'15)
Number of pages: 10
Publication status: Published - 2015
Event: 4th Workshop on Vision and Language - Lisbon, Portugal
Duration: 18 Sept 2015 → …


Conference: 4th Workshop on Vision and Language
Abbreviated title: VL'15
Period: 18/09/15 → …


