TY - JOUR
T1 - Learning With Stochastic Guidance for Robot Navigation
AU - Xie, Linhai
AU - Miao, Yishu
AU - Wang, Sen
AU - Blunsom, Phil
AU - Wang, Zhihua
AU - Chen, Changhao
AU - Markham, Andrew
AU - Trigoni, Niki
N1 - Funding Information:
Manuscript received February 1, 2019; revised July 12, 2019 and November 6, 2019; accepted February 23, 2020. Date of publication March 23, 2020; date of current version January 5, 2021. This work was supported by EPSRC Mobile Robotics: Enabling a Pervasive Technology of the Future under Grant EP/M019918/1. (Corresponding author: Linhai Xie.) Linhai Xie, Phil Blunsom, Zhihua Wang, Changhao Chen, Andrew Markham, and Niki Trigoni are with the Department of Computer Science, University of Oxford, Oxford OX1 3QD, U.K. (e-mail: [email protected]).
Publisher Copyright:
© 2020 IEEE.
Copyright:
Copyright 2021 Elsevier B.V., All rights reserved.
PY - 2021/1
Y1 - 2021/1
N2 - Due to the sparse rewards and high degree of environmental variation, reinforcement learning approaches, such as deep deterministic policy gradient (DDPG), are plagued by issues of high variance when applied in complex real-world environments. We present a new framework for overcoming these issues by incorporating a stochastic switch, allowing an agent to choose between high- and low-variance policies. The stochastic switch can be jointly trained with the original DDPG in the same framework. In this article, we demonstrate the power of the framework in a navigation task, where the robot can dynamically choose to learn through exploration or to use the output of a heuristic controller as guidance. Instead of starting from completely random actions, the navigation capability of a robot can be quickly bootstrapped by several simple independent controllers. The experimental results show that with the aid of stochastic guidance, we are able to effectively and efficiently train DDPG navigation policies and achieve significantly better performance than state-of-the-art baseline models.
AB - Due to the sparse rewards and high degree of environmental variation, reinforcement learning approaches, such as deep deterministic policy gradient (DDPG), are plagued by issues of high variance when applied in complex real-world environments. We present a new framework for overcoming these issues by incorporating a stochastic switch, allowing an agent to choose between high- and low-variance policies. The stochastic switch can be jointly trained with the original DDPG in the same framework. In this article, we demonstrate the power of the framework in a navigation task, where the robot can dynamically choose to learn through exploration or to use the output of a heuristic controller as guidance. Instead of starting from completely random actions, the navigation capability of a robot can be quickly bootstrapped by several simple independent controllers. The experimental results show that with the aid of stochastic guidance, we are able to effectively and efficiently train DDPG navigation policies and achieve significantly better performance than state-of-the-art baseline models.
KW - Deep deterministic policy gradient (DDPG)
KW - REINFORCE
KW - deep reinforcement learning (DRL)
KW - robot navigation
UR - http://www.scopus.com/inward/record.url?scp=85099208115&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2020.2977924
DO - 10.1109/TNNLS.2020.2977924
M3 - Article
C2 - 32203029
SN - 2162-237X
VL - 32
SP - 166
EP - 176
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 1
ER -