Continual lifelong learning with neural networks: A review

German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, Stefan Wermter

Research output: Contribution to journal › Article

Abstract

Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational learning systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
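The abstract's central claim — that sequential training on non-stationary data overwrites previously learned knowledge — can be seen in miniature. The following sketch (an illustration constructed for this page, not code from the paper) trains a single-parameter model by SGD on two conflicting "tasks" in sequence and measures how badly the first task is forgotten:

```python
import numpy as np

def train(w, xs, ys, lr=0.1, epochs=50):
    """Plain per-sample SGD on squared error 0.5*(w*x - y)^2."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w -= lr * (w * x - y) * x  # gradient step on one sample
    return w

xs = np.array([1.0, 2.0, 3.0])

# Task A: the target relation is y = 2x.
w = train(0.0, xs, 2.0 * xs)
err_A_before = abs(w - 2.0)   # near zero: task A is learned

# Task B: a conflicting relation, y = -2x. No replay, no regularization.
w = train(w, xs, -2.0 * xs)
err_A_after = abs(w - 2.0)    # large: task A has been overwritten

print(err_A_before, err_A_after)
```

After the second phase the weight converges to the task-B solution and the task-A error jumps from essentially zero to about 4 — catastrophic forgetting in its simplest form. The mechanisms surveyed in the review (structural plasticity, memory replay, regularization of important weights) are all ways of breaking exactly this overwrite.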
Language: English
Pages: 54-71
Number of pages: 18
Journal: Neural Networks
Volume: 113
Early online date: 6 Feb 2019
DOI: 10.1016/j.neunet.2019.01.012
Publication status: Published - May 2019

Keywords

  • Catastrophic forgetting
  • Continual learning
  • Developmental systems
  • Lifelong learning
  • Memory consolidation

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence

Cite this

Parisi, G. I., Kemker, R., Part, J. L., Kanan, C., & Wermter, S. (2019). Continual lifelong learning with neural networks: A review. Neural Networks, 113, 54-71. https://doi.org/10.1016/j.neunet.2019.01.012