A Visually-Aware Conversational Robot Receptionist

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

1 Citation (Scopus)

Abstract

Socially Assistive Robots (SARs) have the potential to play an increasingly important role in a variety of contexts including healthcare, but most existing systems have very limited interactive capabilities. We will demonstrate a robot receptionist that not only supports task-based and social dialogue via natural spoken conversation, but is also capable of visually grounded dialogue: it can perceive and discuss the shared physical environment (e.g. helping users to locate personal belongings or objects of interest). Task-based dialogues include check-in, navigation and FAQs about facilities, alongside social features such as chit-chat, access to the latest news, and a quiz game to play while waiting.
We also show how visual context (objects and their spatial relations) can be combined with linguistic representations of dialogue context, to support visual dialogue and question answering.
We will demonstrate the system on a humanoid ARI robot, which is being deployed in a hospital reception area.
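The combination described above — merging visual context (perceived objects and their spatial relations) with a linguistic representation of the dialogue context — can be illustrated with a minimal sketch. All class names, the relation format, and the grounding logic below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch: a dialogue state that fuses visual context
# (objects plus spatial relations) with linguistic context (mentions),
# enough to answer a grounded "where is X?" question.

from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    # Spatial relations as (relation, landmark) pairs, e.g. ("on", "desk").
    relations: list = field(default_factory=list)

@dataclass
class DialogueState:
    # Linguistic context: entities mentioned so far, most recent first.
    mentioned: list = field(default_factory=list)
    # Visual context: objects currently perceived in the shared scene.
    scene: list = field(default_factory=list)

    def ground(self, noun: str):
        """Resolve a noun phrase against the perceived scene."""
        for obj in self.scene:
            if obj.name == noun:
                return obj
        return None

    def answer_where(self, noun: str) -> str:
        """Answer a 'where is X?' question from visual + dialogue context."""
        obj = self.ground(noun)
        if obj is None:
            return f"I can't see a {noun} right now."
        self.mentioned.insert(0, noun)  # update the linguistic context
        if obj.relations:
            rel, landmark = obj.relations[0]
            return f"The {noun} is {rel} the {landmark}."
        return f"I can see the {noun}, but I'm not sure where exactly."

state = DialogueState(scene=[
    SceneObject("bag", relations=[("under", "chair")]),
    SceneObject("umbrella"),
])
print(state.answer_where("bag"))  # -> The bag is under the chair.
```

In a deployed system the scene objects would come from a vision pipeline rather than being hand-built, but the fusion point is the same: the dialogue state holds both modalities, so reference resolution and answer generation can draw on either.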
Original language: English
Title of host publication: Proceedings of SIGdial'22: 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Publisher: Association for Computational Linguistics
Pages: 645-648
Number of pages: 4
ISBN (Print): 978-1-955917-66-7
DOIs
Publication status: Published - 7 Sept 2022

Keywords

  • Visual Dialogue
  • HRI
  • Robot Receptionist
  • Dialogue systems
