Abstract
Today’s conversational robots largely lack multi-party abilities: they cannot differentiate between human speakers, identify the addressee of an utterance, understand complex social situations, or adapt their behaviour accordingly. Crucially, no corpus exists to evaluate whether existing multi-party systems can track the individual or shared goals of multiple users. To address these issues, we require realistic data. We therefore describe and motivate a new data collection design for eliciting complex and natural multi-party conversations with a social robot. Prior work on dyadic data collection between a single human and a robot focuses on utterances directed at the robot, but for multi-party conversation we also need observations of humans speaking to each other. Our design therefore focuses on eliciting conversation between all participants, particularly conversations in which participants have different goals and information. Acted role-play interactions are often scripted and can therefore yield unrealistic data, so our design instead uses pictograms as task stimuli, leading to more realistic and spontaneous multi-party dialogue. Using this design, we have collected multi-party data with an ARI humanoid robot and older adults visiting a hospital. We describe the annotation scheme and introduce the multi-party goal state tracking task, which we will release in future work.
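To make the proposed multi-party goal state tracking task more concrete, the following is a minimal illustrative sketch of how per-speaker and shared goal states might be represented and updated turn by turn. The data structures, field names (`speaker`, `addressee`, `goals`), and slot values below are our own assumptions chosen for illustration; they are not the paper's actual annotation scheme or released task format.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker: str    # who produced the utterance
    addressee: str  # who the utterance is directed at (another human or the robot)
    text: str

@dataclass
class GoalState:
    # Hypothetical representation: each participant holds their own goal slots,
    # and a slot is promoted to the shared state once all participants agree.
    individual: dict = field(default_factory=dict)  # speaker -> {slot: value}
    shared: dict = field(default_factory=dict)      # {slot: value}

    def update(self, turn: Turn, slots: dict) -> None:
        """Merge newly expressed slot values into the speaker's goal state."""
        self.individual.setdefault(turn.speaker, {}).update(slots)

    def promote_shared(self, slot: str) -> None:
        """Promote a slot to the shared state if every speaker agrees on its value."""
        values = {goals.get(slot) for goals in self.individual.values()}
        if len(values) == 1 and None not in values:
            self.shared[slot] = values.pop()

# Example: two hospital visitors with different goals talking to the robot.
state = GoalState()
state.update(Turn("A", "robot", "Where is the cardiology ward?"),
             {"destination": "cardiology"})
state.update(Turn("B", "A", "I need to find the cafe first."),
             {"destination": "cafe"})
state.promote_shared("destination")  # no promotion: A and B disagree
print(state.individual)  # {'A': {'destination': 'cardiology'}, 'B': {'destination': 'cafe'}}
print(state.shared)      # {}
```

The point of the sketch is the distinction the abstract draws: participants can hold conflicting individual goals, so a tracker must maintain separate states per speaker rather than a single dialogue state as in dyadic goal tracking.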
| Original language | English |
| --- | --- |
| Number of pages | 10 |
| Publication status | Published - 22 Feb 2023 |
| Event | International Workshop on Spoken Dialogue Systems Technology 2023, University of Southern California Institute for Creative Technologies, Los Angeles, United States. Duration: 21 Feb 2023 → 24 Feb 2023 |
Conference
| Conference | International Workshop on Spoken Dialogue Systems Technology 2023 |
| --- | --- |
| Abbreviated title | IWSDS 2023 |
| Country/Territory | United States |
| City | Los Angeles |
| Period | 21/02/23 → 24/02/23 |