Learning Mobile Manipulation through Deep Reinforcement Learning

Cong Wang, Qifeng Zhang, Qiyan Tian, Shuo Li, Xiaohui Wang, David Lane, Yvan Petillot, Sen Wang

Research output: Contribution to journal › Article › peer-review

64 Citations (Scopus)
203 Downloads (Pure)

Abstract

Mobile manipulation has a broad range of applications in robotics. However, it is usually more challenging than fixed-base manipulation due to the complex coordination required between a mobile base and a manipulator. Although recent works have demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of them are not applicable to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system is proposed that integrates state-of-the-art deep reinforcement learning algorithms with visual perception. Its efficient framework decouples visual perception from deep reinforcement learning control, which enables generalization from simulation training to real-world testing. Extensive simulation and real-world experiments show that the proposed system can autonomously grasp different types of objects in various scenarios, verifying its effectiveness.
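The decoupling idea in the abstract — perception reduces raw camera images to a compact state, and the learned policy only ever consumes that state — can be sketched as follows. All class names, shapes, and the stand-in controllers below are illustrative assumptions, not the paper's actual architecture or API; the "policy" here is a dummy proportional controller standing in for a trained deep reinforcement learning policy.

```python
import numpy as np

class PerceptionModule:
    """Maps a raw camera image to an estimated object pose (x, y, z).

    Stand-in for a learned detector: here, the location of the brightest
    pixel serves as a dummy object position.
    """
    def estimate_pose(self, image: np.ndarray) -> np.ndarray:
        ys, xs = np.unravel_index(np.argmax(image), image.shape)
        return np.array([xs, ys, 0.0], dtype=float)

class WholeBodyPolicy:
    """Maps a low-dimensional state to base velocity + arm joint commands."""
    def __init__(self, n_arm_joints: int = 6):
        self.n_arm_joints = n_arm_joints

    def act(self, object_pose: np.ndarray) -> dict:
        # Dummy proportional rule in place of a trained DRL policy.
        base_cmd = 0.1 * object_pose[:2]       # drive the base toward the object
        arm_cmd = np.zeros(self.n_arm_joints)  # hold the arm still
        return {"base": base_cmd, "arm": arm_cmd}

# Because the policy consumes only the low-dimensional pose, a policy trained
# in simulation can be paired with a real-world perception module at test time.
perception = PerceptionModule()
policy = WholeBodyPolicy()
image = np.zeros((64, 64))
image[10, 20] = 1.0                            # fake camera frame, one bright pixel
action = policy.act(perception.estimate_pose(image))
print(action["base"])                          # base command from the estimated pose
```

The design point the sketch illustrates is the interface: swapping the perception module (simulated renderer vs. real camera) leaves the policy untouched, which is what makes sim-to-real transfer tractable.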

Original language: English
Article number: 939
Journal: Sensors
Volume: 20
Issue number: 3
DOIs
Publication status: Published - 10 Feb 2020

Keywords

  • Deep learning
  • Deep reinforcement learning
  • Mobile manipulation

ASJC Scopus subject areas

  • Analytical Chemistry
  • Biochemistry
  • Atomic and Molecular Physics, and Optics
  • Instrumentation
  • Electrical and Electronic Engineering
