Abstract
Co-speech gestures enhance both human-human and human-robot interactions. This paper examines the efficacy of a data-driven approach for generating synchronised co-speech gestures in three social robots to improve social interactions. Building on a sequence-to-sequence model that maps speech to gestures [21], this work uses the Talking With Hands 16.2M dataset [11] to generate natural gestures for face-to-face conversations. Additionally, we address synchronisation issues identified in the original study. The model’s generality is tested on three robots—NAO, Pepper, and ARI. Objective and subjective evaluations confirm that a data-driven approach effectively generates synchronised co-speech gestures.
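To make the abstract's pipeline concrete, the sketch below shows the shape of a speech-to-gesture sequence-to-sequence mapping: encode a sequence of speech features into a summary state, then decode a sequence of gesture frames (joint angles) from it. This is a minimal toy illustration with assumed dimensions and random stand-in weights; the paper's actual model is a trained neural sequence-to-sequence network, and all names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

SPEECH_DIM = 13   # per-frame speech features, e.g. MFCC-like (assumed)
HIDDEN_DIM = 8    # size of the encoder summary state (assumed)
JOINT_DIM = 10    # robot joint angles per gesture frame (assumed)

# Stand-in weights; in practice these would be learned from paired
# speech/motion data such as Talking With Hands 16.2M.
W_enc = rng.standard_normal((SPEECH_DIM, HIDDEN_DIM)) * 0.1
W_dec = rng.standard_normal((HIDDEN_DIM, JOINT_DIM)) * 0.1

def speech_to_gestures(speech_frames: np.ndarray, n_out: int) -> np.ndarray:
    """Map a (T, SPEECH_DIM) speech-feature sequence to (n_out, JOINT_DIM)
    gesture frames: encode to a fixed summary, then decode frame by frame."""
    h = np.tanh(speech_frames @ W_enc).mean(axis=0)   # encoder summary
    frames = []
    for _ in range(n_out):                            # step-wise decode
        out = np.tanh(h @ W_dec)                      # one gesture frame
        frames.append(out)
        h = 0.9 * h + 0.1 * np.tanh(out @ W_dec.T)    # toy state update
    return np.stack(frames)

speech = rng.standard_normal((50, SPEECH_DIM))        # 50 audio frames
gestures = speech_to_gestures(speech, n_out=25)       # 25 gesture frames
print(gestures.shape)
```

Decoding a different number of gesture frames than input audio frames is what creates the synchronisation problem the paper addresses: the decoded motion must be time-aligned with the speech before being played on a robot's joints.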
| Original language | English |
|---|---|
| Title of host publication | HAI '24: Proceedings of the 12th International Conference on Human-Agent Interaction |
| Publisher | Association for Computing Machinery |
| Pages | 453-455 |
| Number of pages | 3 |
| ISBN (Print) | 9798400711787 |
| Publication status | Published - 24 Nov 2024 |
| Event | 12th International Conference on Human-Agent Interaction 2024, Swansea University, Swansea, United Kingdom, 24 Nov 2024 → 27 Nov 2024, https://hai-conference.net/hai2024/ |
Conference
| Conference | 12th International Conference on Human-Agent Interaction 2024 |
|---|---|
| Country/Territory | United Kingdom |
| City | Swansea |
| Period | 24/11/24 → 27/11/24 |
| Internet address | https://hai-conference.net/hai2024/ |