Abstract
Socially Assistive Robots (SARs) have the potential to play an increasingly important role in a variety of contexts, including healthcare, but most existing systems have very limited interactive capabilities. We will demonstrate a robot receptionist that not only supports task-based and social dialogue via natural spoken conversation, but is also capable of visually grounded dialogue: it can perceive and discuss the shared physical environment (e.g. helping users to locate personal belongings or objects of interest). Task-based dialogues include check-in, navigation, and FAQs about facilities, alongside social features such as chit-chat, access to the latest news, and a quiz game to play while waiting.
We also show how visual context (objects and their spatial relations) can be combined with linguistic representations of dialogue context, to support visual dialogue and question answering.
We will demonstrate the system on a humanoid ARI robot, which is being deployed in a hospital reception area.
Original language | English |
---|---|
Title of host publication | Proceedings of SIGdial'22: 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue |
Publisher | Association for Computational Linguistics |
Pages | 645-648 |
Number of pages | 4 |
ISBN (Print) | 978-1-955917-66-7 |
DOIs | |
Publication status | Published - 7 Sept 2022 |
Keywords
- Visual Dialogue
- HRI
- Robot Receptionist
- Dialogue systems