Multisensory Learning Framework for Robot Drumming

Andrey Barsky, Claudio Zito, Hiroki Mori, Tetsuya Ogata, Jeremy L. Wyatt

Research output: Contribution to conference › Paper › peer-review

Abstract

The hype around sensorimotor learning is currently reaching fever pitch, thanks to the latest advancements in deep learning. In this paper, we present an open-source framework for collecting large-scale, time-synchronised synthetic data from highly disparate sensory modalities, such as audio, video, and proprioception, for learning robot manipulation tasks. We demonstrate the learning of non-linear sensorimotor mappings for a humanoid drumming robot that generates novel motion sequences from desired audio data using cross-modal correspondences. We evaluate our system by the quality of its cross-modal retrieval, i.e. how well it generates suitable motion sequences to match desired unseen audio or video sequences.
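
The abstract does not spell out the retrieval mechanism, but the described cross-modal correspondence can be illustrated with a minimal sketch: paired audio and motion features are embedded into a shared space, and an unseen audio clip retrieves its nearest motion sequence. The encoder architecture, feature dimensions, and InfoNCE-style contrastive loss below are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch of shared-embedding cross-modal retrieval.
    # All shapes and the loss are assumptions for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    AUDIO_DIM, MOTION_DIM, EMBED_DIM = 128, 42, 32  # assumed feature sizes

    class Encoder(nn.Module):
        """Small MLP mapping one modality into the shared embedding space."""
        def __init__(self, in_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, EMBED_DIM))
        def forward(self, x):
            return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

    audio_enc, motion_enc = Encoder(AUDIO_DIM), Encoder(MOTION_DIM)
    opt = torch.optim.Adam(
        list(audio_enc.parameters()) + list(motion_enc.parameters()), lr=1e-3)

    # Toy stand-in for time-synchronised audio/motion feature pairs;
    # row i of `audio` is assumed to be recorded with row i of `motion`.
    audio = torch.randn(256, AUDIO_DIM)
    motion = torch.randn(256, MOTION_DIM)

    for _ in range(100):
        za, zm = audio_enc(audio), motion_enc(motion)
        logits = za @ zm.t() / 0.1           # cosine similarity / temperature
        labels = torch.arange(len(za))       # matched pairs lie on the diagonal
        loss = F.cross_entropy(logits, labels)  # InfoNCE-style contrastive loss
        opt.zero_grad(); loss.backward(); opt.step()

    # Retrieval: embed an unseen audio query, return the best-matching motion.
    with torch.no_grad():
        query = audio_enc(torch.randn(1, AUDIO_DIM))
        best = (query @ motion_enc(motion).t()).argmax(dim=-1)
        retrieved_motion = motion[best]      # motion sequence to execute

The same retrieval direction can be reversed (motion query, audio result), which matches the paper's evaluation of retrieval quality across modalities.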
Original language: English
Publication status: Published - 2018
Event: 2nd Workshop on Crossmodal Learning for Intelligent Robotics 2018 - Madrid, Spain
Duration: 5 Oct 2018 → …

Workshop

Workshop: 2nd Workshop on Crossmodal Learning for Intelligent Robotics 2018
Abbreviated title: CLIR'18
Country/Territory: Spain
City: Madrid
Period: 5/10/18 → …
