Gesture Generation from Trimodal Context for Humanoid Robots

Shiyi Tang, Christian Dondrup

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Natural co-speech gestures are essential to improving the experience of human-robot interaction (HRI). However, current gesture generation approaches suffer from several limitations: gestures that are unnatural, that do not align with the speech and its content, or that lack diverse speaker styles. This work therefore aims to reproduce the work by [5], which generates natural gestures in simulation from trimodal inputs, and to apply it to a physical robot. For objective evaluation, “motion variance” and the “Fréchet Gesture Distance (FGD)” are employed; human participants were then recruited to evaluate the gestures subjectively. Results show that the movements from that paper were successfully transferred to the robot, and that the generated gestures exhibit diverse styles and are correlated with the speech. Moreover, there are significant differences in likeability and style between the different gestures.
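
For context on the two objective metrics named in the abstract, the sketch below shows how they are commonly computed. FGD follows the Fréchet Inception Distance formulation applied to gesture feature embeddings; the embedding network (typically a pretrained autoencoder), the function names, and the array shapes here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_gesture_distance(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussian fits of two (N, D) embedding sets.

    real_feats / gen_feats would come from a pretrained gesture feature
    extractor (e.g. an autoencoder), which is assumed here and not shown.
    """
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_g = np.cov(gen_feats, rowvar=False)

    # Matrix square root of the covariance product; sqrtm can return a
    # complex result due to numerical error, so keep only the real part.
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_r - mu_g
    # FGD = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^(1/2))
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

def motion_variance(motion: np.ndarray) -> float:
    """Mean per-coordinate variance over time of a (T, J, 3) joint sequence.

    One plausible reading of the abstract's "motion variance" metric; the
    paper may aggregate differently.
    """
    return float(motion.var(axis=0).mean())
```

As with FID, a lower FGD is typically read as the generated gesture distribution lying closer to the human reference data, while a higher motion variance indicates livelier, less static motion.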
Original language: English
Title of host publication: HAI '24: Proceedings of the 12th International Conference on Human-Agent Interaction
Publisher: Association for Computing Machinery
Pages: 426-428
Number of pages: 3
ISBN (Print): 9798400711787
DOIs
Publication status: Published - 24 Nov 2024
Event: 12th International Conference on Human-Agent Interaction 2024 - Swansea University, Swansea, United Kingdom
Duration: 24 Nov 2024 - 27 Nov 2024
https://hai-conference.net/hai2024/

Conference

Conference: 12th International Conference on Human-Agent Interaction 2024
Country/Territory: United Kingdom
City: Swansea
Period: 24/11/24 - 27/11/24
Internet address: https://hai-conference.net/hai2024/
