Continual lifelong learning in neural systems: overcoming catastrophic forgetting and transferring knowledge for future learning

Owen He

Research output: Thesis › Thesis fully internal (DIV)



Intelligent agents are expected to learn diverse skills over their lifetime. However, when trained on a sequence of different tasks, today's most popular neural-network-based AI systems often suffer from catastrophic forgetting: learning new knowledge causes dramatic forgetting of old knowledge. We use mathematical tools such as conceptors, Bayesian learning, and information theory to study this problem formally, and we propose new theories and solutions to overcome it, so that existing knowledge in AI agents is not only preserved but also used to accelerate the learning of new tasks. The proposed methods include ways to identify, protect, and refresh existing knowledge in neural systems, as well as algorithms to transfer knowledge from one system to another. We demonstrate the effectiveness of these methods in a wide range of applications, including image recognition, adversarial games, and neuromorphic engineering.
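One of the mathematical tools named above, the conceptor, is a soft projection matrix computed from a network's state correlations on a task; it characterizes the state subspace that task occupies, which can then be protected when later tasks are trained. A minimal NumPy sketch, using the standard conceptor formula C = R(R + α⁻²I)⁻¹ (the variable names, the aperture value α, and the synthetic data below are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def conceptor(states, alpha=10.0):
    """Compute a conceptor from network activations.

    states: (n_samples, n_units) matrix of activations recorded on one task.
    alpha:  aperture parameter controlling how sharply C projects.
    Returns C = R (R + alpha^{-2} I)^{-1}, where R is the state
    correlation matrix. Singular values of C lie in [0, 1): close to 1
    on directions the task uses, close to 0 on unused directions.
    """
    n = states.shape[1]
    R = states.T @ states / states.shape[0]  # state correlation matrix
    return R @ np.linalg.inv(R + alpha**-2 * np.eye(n))

# Illustrative example: activations confined to a 2-dimensional subspace
# of a 10-unit network, as might happen when one task is learned.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 10))
C = conceptor(X)
s = np.linalg.svd(C, compute_uv=False)  # ~1 on 2 used dims, ~0 elsewhere
```

In continual-learning schemes built on conceptors, gradient updates for a new task can be restricted to the complement of the subspaces captured by earlier tasks' conceptors, which is one concrete way to "identify and protect" existing knowledge.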
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution
  • University of Groningen
Supervisors
  • Jaeger, Herbert, Supervisor
  • Sabatelli, Matthia, Co-supervisor
Award date: 4-May-2023
Place of Publication: [Groningen]
Publication status: Published - 2023
