Learning With Stochastic Guidance for Robot Navigation

Linhai Xie, Yishu Miao, Sen Wang, Phil Blunsom, Zhihua Wang, Changhao Cheng, Andrew Markham, Niki Trigoni

Research output: Contribution to journal › Article › peer-review


Abstract

Due to the sparse rewards and high degree of environmental variation, reinforcement learning approaches, such as deep deterministic policy gradient (DDPG), are plagued by issues of high variance when applied in complex real-world environments. We present a new framework for overcoming these issues by incorporating a stochastic switch, allowing an agent to choose between high- and low-variance policies. The stochastic switch can be jointly trained with the original DDPG in the same framework. In this article, we demonstrate the power of the framework in a navigation task, where the robot can dynamically choose to learn through exploration or to use the output of a heuristic controller as guidance. Instead of starting from completely random actions, the navigation capability of a robot can be quickly bootstrapped by several simple independent controllers. The experimental results show that with the aid of stochastic guidance, we are able to effectively and efficiently train DDPG navigation policies and achieve significantly better performance than state-of-the-art baseline models.
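The stochastic switch described in the abstract can be sketched as a categorical policy over candidate controllers, trained with REINFORCE. This is a minimal illustrative sketch, not the authors' implementation: the names `ddpg_actor`, `heuristic_controller`, and the toy state layout are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddpg_actor(state):
    # Hypothetical learned policy: maps the state to a 2-D velocity command.
    return np.tanh(state[:2])

def heuristic_controller(state):
    # Hypothetical hand-crafted controller: steer toward the goal position.
    direction = state[2:4] - state[:2]
    return direction / (np.linalg.norm(direction) + 1e-8)

CONTROLLERS = [ddpg_actor, heuristic_controller]

def switch_probs(state, w):
    # Stochastic switch: softmax over the candidate controllers.
    logits = w @ state
    e = np.exp(logits - logits.max())
    return e / e.sum()

def act(state, w):
    # Sample which controller to follow, then execute its action.
    probs = switch_probs(state, w)
    choice = rng.choice(len(probs), p=probs)
    return choice, probs, CONTROLLERS[choice](state)

def reinforce_update(w, state, choice, probs, reward, lr=0.01):
    # REINFORCE gradient of log pi(choice | state) for a linear-softmax switch:
    # (one_hot(choice) - probs) outer state.
    grad = (np.eye(len(probs))[choice] - probs)[:, None] * state[None, :]
    return w + lr * reward * grad

# Toy rollout step: state = [robot position (2), goal position (2)].
state = np.array([0.1, -0.3, 1.0, 0.5])
w = np.zeros((2, 4))
choice, probs, action = act(state, w)
w = reinforce_update(w, state, choice, probs, reward=1.0)
```

In the paper's framework the switch and the DDPG policy are trained jointly; the sketch above isolates only the switching and its policy-gradient update to show how guidance from simple controllers can bootstrap exploration.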
Original language: English
Pages (from-to): 166-176
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 32
Issue number: 1
Early online date: 23 Mar 2020
DOIs
Publication status: Published - Jan 2021

Keywords

  • Deep deterministic policy gradient (DDPG)
  • REINFORCE
  • deep reinforcement learning (DRL)
  • robot navigation

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

