ViCA: Combining visual, social, and task-oriented conversational AI in a healthcare setting

Georgios Pantazopoulos, Jeremy Bruyere, Maria-Vasiliki Nikandrou, Thibaud Boissier, Supun Hemanthage, Sachish Binha, Vidyul Shah, Christian Dondrup, Oliver Lemon

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Recent developments in computer vision and conversational systems have provided the AI community with novel perspectives towards improving the cognitive capabilities of engaging socially assistive robots. We show how to develop conversational skills for a hospital receptionist robot that incorporates social conversation based on visual information as well as task-based dialogue. Fusing the traditional modular conversational system architecture with recent developments in computer vision and scene graph research, our agent (called ‘ViCA’) supports both visual question answering and social conversational capabilities based on the visual scene. In particular, our agent can provide guidance to users by locating visible objects in the room and can engage in social dialogue using visual prompts, such as the user’s clothing or possessions. We conduct a comprehensive online evaluation study with 21 participants, showcasing that the ViCA system is perceived as both helpful and entertaining.
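As a rough illustration of the object-locating capability described in the abstract, the sketch below shows how a dialogue component might ground "where is X?" questions against a simplified scene graph. This is not the authors' implementation; the data structures and helper names (SceneObject, locate) are hypothetical, and a real system would populate the graph from a vision pipeline rather than a hard-coded list.

```python
# Minimal sketch: answering "where is X?" from a toy scene graph.
# All names here are illustrative assumptions, not taken from the ViCA paper.

from dataclasses import dataclass


@dataclass
class SceneObject:
    label: str      # detected object, e.g. "water cooler"
    relation: str   # spatial relation to an anchor, e.g. "next to"
    anchor: str     # anchor object, e.g. "reception desk"


# Toy scene graph: each visible object with one spatial relation.
SCENE = [
    SceneObject("water cooler", "next to", "reception desk"),
    SceneObject("elevator", "to the left of", "main entrance"),
]


def locate(label: str) -> str:
    """Return a natural-language location for a requested object, if visible."""
    for obj in SCENE:
        if obj.label == label.lower().strip():
            return f"The {obj.label} is {obj.relation} the {obj.anchor}."
    return f"Sorry, I can't see a {label} from here."


if __name__ == "__main__":
    print(locate("water cooler"))  # -> The water cooler is next to the reception desk.
    print(locate("pharmacy"))      # -> Sorry, I can't see a pharmacy from here.
```

In a deployed receptionist robot, the lookup would typically be replaced by a query over a scene graph produced by an object detector and relation predictor, with the surface realisation handled by the dialogue manager.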
Original language: English
Title of host publication: 23rd ACM International Conference on Multimodal Interaction
Publication status: Accepted/In press - 26 Jul 2021
Event: 23rd ACM International Conference on Multimodal Interaction 2021 - Montreal, Canada
Duration: 18 Oct 2021 – 22 Oct 2021

Conference

Conference: 23rd ACM International Conference on Multimodal Interaction 2021
Abbreviated title: ICMI 2021
Country/Territory: Canada
City: Montreal
Period: 18/10/21 – 22/10/21
