Overcoming Catastrophic Interference: Neuroscience-inspired Models for Continuous Learning in Neural Networks

Abstract

Catastrophic interference (also known as catastrophic forgetting) occurs when a neural network abruptly loses previously learned knowledge while acquiring new information. Human and animal brains, however, have evolved several mechanisms that solve this problem. In this study, we analyze a wide range of brain-inspired mitigation strategies: regularization-based methods such as Elastic Weight Consolidation and Synaptic Intelligence, which preserve important weights during training; rehearsal-based methods and generative replay, which maintain a representative subset of past data to mitigate forgetting, a concept closely aligned with hippocampal experience replay in biological memory systems; and architecture-based solutions that use dynamic network structures and modular architectures able to adapt over time without forgetting, mirroring the compartmentalized and interactive roles of the hippocampus and neocortex. We also examine the brain's own defenses against catastrophic interference, particularly the hippocampus and its role during sleep, through processes such as synaptic homeostasis and memory consolidation during slow-wave and REM sleep. Our review provides a detailed comparative analysis of these brain-inspired methods in terms of effectiveness, scalability, and computational efficiency, highlighting their strengths, weaknesses, practical applications, limitations, and challenges, including the trade-off between plasticity and stability and the computational cost of some approaches. Finally, we propose directions for future research toward stronger, brain-inspired deep-learning models that can learn continuously, including new hybrid approaches, improved scalability of current methods, and transfer learning to support knowledge retention across tasks.
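The weight-preservation idea behind Elastic Weight Consolidation can be illustrated with a short sketch: a quadratic penalty anchors parameters that were important for a previous task, with importance estimated from the diagonal of the Fisher information. The snippet below is a minimal, hypothetical PyTorch illustration rather than the implementation evaluated in the review; the names `ewc_penalty`, `estimate_fisher_diag`, and `lambda_ewc` are placeholders introduced here for illustration.

```python
# Illustrative EWC-style penalty (a sketch, not the paper's code).
# Assumes PyTorch; `fisher_diag` and `old_params` are dicts keyed by parameter name.
import torch


def estimate_fisher_diag(model, data_loader, loss_fn):
    """Rough diagonal-Fisher estimate: average squared gradients of the old-task loss."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for inputs, targets in data_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}


def ewc_penalty(model, fisher_diag, old_params, lambda_ewc=100.0):
    """Quadratic penalty pulling parameters toward their old-task values,
    weighted by their estimated importance."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in fisher_diag:
            penalty = penalty + (fisher_diag[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lambda_ewc * penalty


# Usage sketch while training on the new task:
#   loss = new_task_loss + ewc_penalty(model, fisher_diag, old_params)
#   loss.backward(); optimizer.step()
```

In the same spirit, rehearsal-based methods would interleave minibatches drawn from a small buffer of stored (or generated) past examples with the new-task data, so that each gradient step also rehearses old knowledge.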
