Incremental Learning of Planning Actions in Model-Based Reinforcement Learning

Jun Hao Alvin Ng, Ronald P. A. Petrick

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

11 Citations (Scopus)

Abstract

The soundness and optimality of a plan depend on the correctness of the domain model. Specifying complete domain models can be difficult when interactions between an agent and its environment are complex. We propose a model-based reinforcement learning (MBRL) approach to solve planning problems with unknown models. The model is learned incrementally over episodes using only experiences from the current episode, which suits non-stationary environments. We introduce the novel concept of reliability as an intrinsic motivation for MBRL, along with a method for learning from failure to prevent repeated instances of similar failures. Our motivation is to improve the learning efficiency and goal-directedness of MBRL. We evaluate our work with experimental results for three planning domains.
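The abstract's notion of reliability as an intrinsic motivation can be illustrated with a minimal sketch. This is an assumption-laden interpretation, not the paper's actual formulation: here, an action's reliability is taken to be the fraction of times the learned model's prediction matched the observed outcome, and low reliability yields a higher intrinsic reward, steering the agent toward experience where its model is least trusted. The class and method names (`ReliabilityTracker`, `intrinsic_reward`) are hypothetical.

```python
class ReliabilityTracker:
    """Illustrative sketch: per-action reliability as an intrinsic signal.

    Reliability is estimated as the fraction of transitions for which the
    learned model's predicted next state matched the observed next state.
    (The paper's exact definition may differ; this is an assumption.)
    """

    def __init__(self):
        self.hits = {}   # action -> count of correct model predictions
        self.tries = {}  # action -> total predictions checked

    def update(self, action, predicted_state, observed_state):
        """Record whether the model's prediction for this action was correct."""
        self.tries[action] = self.tries.get(action, 0) + 1
        if predicted_state == observed_state:
            self.hits[action] = self.hits.get(action, 0) + 1

    def reliability(self, action):
        """Empirical prediction accuracy; unseen actions default to 0."""
        if self.tries.get(action, 0) == 0:
            return 0.0
        return self.hits[action] / self.tries[action]

    def intrinsic_reward(self, action, bonus=1.0):
        """Low-reliability actions receive a larger intrinsic bonus,
        encouraging the agent to gather data where the model is weakest."""
        return bonus * (1.0 - self.reliability(action))
```

In use, the tracker would be updated after each transition in the current episode, consistent with the incremental, episode-local learning the abstract describes.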

Original language: English
Title of host publication: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Editors: Sarit Kraus
Pages: 3195-3201
Number of pages: 7
ISBN (Electronic): 9780999241141
Publication status: Published - Aug 2019
Event: 28th International Joint Conference on Artificial Intelligence 2019 - Macao, China
Duration: 10 Aug 2019 - 16 Aug 2019
https://ijcai19.org/

Conference

Conference: 28th International Joint Conference on Artificial Intelligence 2019
Abbreviated title: IJCAI 2019
Country/Territory: China
City: Macao
Period: 10/08/19 - 16/08/19
Internet address: https://ijcai19.org/

ASJC Scopus subject areas

  • Artificial Intelligence
