Data Collection for Multi-party Task-based Dialogue in Social Robotics

Research output: Contribution to conference › Paper › peer-review



Today’s conversational robots largely lack multi-party abilities to differentiate between human speakers, identify addressees of an utterance, understand complex social situations, or adapt their behaviour accordingly. Crucially, no corpus exists to evaluate whether existing multi-party systems can track individual or shared goals of multiple users. To address these issues, we require realistic data. We therefore describe and motivate a new data collection design for eliciting complex and natural multi-party conversations with a social robot. Prior work on dyadic data collections between single humans and robots focuses on utterances directed at robots, but for multi-party conversation we also need observations of humans speaking to each other. Our design therefore focuses on eliciting conversation between all participants, and particularly interactions in which participants have different goals and information. Acted role-play interactions are often scripted and can therefore yield unrealistic data, so instead our design uses pictograms as task stimuli, leading to more realistic and spontaneous multi-party dialogue. Using this design, we have collected multi-party data with an ARI humanoid robot and older adults visiting a hospital. We describe the annotation scheme and introduce the multi-party goal state tracking task, which we will release in future work.
Original language: English
Number of pages: 10
Publication status: Published - 22 Feb 2023
Event: International Workshop on Spoken Dialogue Systems Technology 2023 - University of Southern California Institute for Creative Technologies, Los Angeles, United States
Duration: 21 Feb 2023 - 24 Feb 2023


Conference: International Workshop on Spoken Dialogue Systems Technology 2023
Abbreviated title: IWSDS 2023
Country/Territory: United States
City: Los Angeles

