Abstract
Intelligent agents are expected to learn diverse skills over their lifetime. However, when trained on a sequence of different tasks, today's most popular neural network-based AI systems often suffer from catastrophic forgetting: learning new knowledge leads to dramatic forgetting of old knowledge. We use mathematical tools such as conceptors, Bayesian learning, and information theory to formally study this problem, and we propose new theories and solutions to overcome it, so that existing knowledge in AI agents is not only preserved but also used to accelerate learning of new tasks in the future. The proposed methods include ways to identify, protect, and refresh existing knowledge in neural systems, as well as algorithms to transfer knowledge from one system to another. We demonstrate the effectiveness of these methods in a wide range of applications, including image recognition, adversarial games, and neuromorphic engineering.
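Of the tools named above, conceptors have a particularly compact definition that illustrates how existing knowledge can be identified and protected. The sketch below computes the standard conceptor matrix C = R(R + α⁻²I)⁻¹ from a layer's activations (Jaeger's formulation); it is a minimal illustration of the general technique, not the thesis's exact algorithm, and the layer size, sample count, and `aperture` value are assumptions chosen for the example.

```python
import numpy as np

def conceptor(states: np.ndarray, aperture: float = 10.0) -> np.ndarray:
    """Compute a conceptor matrix from a batch of network activations.

    states:   (n_samples, n_units) activation vectors collected on a task.
    aperture: controls how tightly the conceptor hugs the state cloud.
    Returns C = R (R + aperture^{-2} I)^{-1}, where R is the state
    correlation matrix (standard conceptor definition).
    """
    n = states.shape[1]
    R = states.T @ states / states.shape[0]  # correlation matrix of states
    return R @ np.linalg.inv(R + aperture**-2 * np.eye(n))

# Toy usage: activations from a hypothetical 100-unit layer on 500 inputs.
acts = np.random.randn(500, 100)
C = conceptor(acts)

# Projecting updates through (I - C) steers learning away from the
# directions already occupied by old tasks; this projection is the core
# idea behind conceptor-based protection against forgetting.
free_space = np.eye(100) - C
```

In conceptor-based continual learning, the conceptors of all past tasks are combined, and gradient updates for a new task are projected into the remaining free space, which is how old knowledge is shielded while new knowledge is acquired.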
Original language | English
---|---
Qualification | Doctor of Philosophy
Awarding Institution |
Supervisors/Advisors |
Award date | 4-May-2023
Place of Publication | [Groningen]
Publisher |
DOIs |
Publication status | Published - 2023