The prototype system therefore demonstrates how to weave together a wide range of natural, everyday conversations with end users, varying in complexity from complex visual dialogue to chitchat and quiz games, to task-oriented, domain-specific conversations.
The system can currently be demonstrated via a web-based interface, and it will soon be deployed on the ARI robot in a hospital waiting room.
|Title of host publication||ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction|
|Subtitle of host publication||Montréal QC Canada October 18 - 22, 2021|
|Editors||Zakia Hammal, Carlos Busso, Catherine Pelachaud, Sharon Oviatt, Albert Ali Salah, Guoying Zhao|
|Place of Publication||New York|
|Publisher||Association for Computing Machinery|
|Number of pages||2|
|Publication status||Published - 18 Oct 2021|
|Event||23rd ACM International Conference on Multimodal Interaction 2021 - Montreal, Canada|
Duration: 18 Oct 2021 → 22 Oct 2021
|Conference||23rd ACM International Conference on Multimodal Interaction 2021|
|Abbreviated title||ICMI 2021|
|Period||18/10/21 → 22/10/21|
- Social Dialogue
- Visual Dialogue
ASJC Scopus subject areas
- Computer Science Applications
- Computer Vision and Pattern Recognition
- Hardware and Architecture
- Human-Computer Interaction
Fingerprint: research topics of 'Combining Visual and Social Dialogue for Human-Robot Interaction'.
Dataset supporting the paper "Am I Allergic to This? Assisting Sight Impaired People in the Kitchen"
Brick, E. R. (Creator), Alonso, V. C. (Creator), O'Brien, C. (Creator), Tong, S. (Creator), Tavernier, E. (Creator), Parekh, A. (Creator), Addlesee, J. (Creator) & Lemon, O. (Creator), Heriot-Watt University, Oct 2021