The field of continual learning is moving towards more efficient and effective methods for preventing catastrophic forgetting in neural networks. Researchers are exploring approaches such as factorization and centralization to learn and consolidate knowledge in a manner closer to how the human brain operates. Another direction studies how model scale and feature learning interact in continual learning, with findings suggesting that increasing model width is beneficial when it reduces feature learning (i.e., pushes training towards the lazy regime). Spiking neural networks and columnar organization are also being investigated as substrates for continual learning. There is growing interest in addressing temporal discontinuity in graph streams and in developing batch-insensitive dynamic graph neural networks. Finally, researchers are working to minimize the memory footprint of memory-based continual learning algorithms by means of lightweight generators. Notable papers include:
- The Importance of Being Lazy: Scaling Limits of Continual Learning, which provides a unified perspective on the role of scale and feature learning in continual learning.
- Leveraging Lightweight Generators for Memory Efficient Continual Learning, which proposes using lightweight generators to enhance existing memory-based continual learning methods while cutting their memory footprint (see the sketch after this list).
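
To make the generator-based replay idea concrete, below is a minimal sketch, not the paper's actual method: a per-class Gaussian over backbone features acts as a "lightweight generator" that replaces a raw exemplar buffer, so only a few statistics per class are stored between tasks. All names (`FeatureReplayLearner`, `consolidate`, `train_step`) and the choice of generator are illustrative assumptions.

```python
# Minimal sketch of replay via a lightweight per-class generator (assumed design,
# not the method from the cited paper): store feature statistics instead of data.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureReplayLearner:
    def __init__(self, backbone: nn.Module, head: nn.Module, feat_dim: int, lr: float = 1e-3):
        self.backbone, self.head = backbone, head
        self.feat_dim = feat_dim
        self.stats = {}  # class id -> (mean, std) of backbone features
        self.opt = torch.optim.Adam(
            list(backbone.parameters()) + list(head.parameters()), lr=lr
        )

    @torch.no_grad()
    def consolidate(self, loader):
        """After finishing a task, keep per-class feature statistics, not raw samples."""
        feats, labels = [], []
        for x, y in loader:
            feats.append(self.backbone(x))
            labels.append(y)
        feats, labels = torch.cat(feats), torch.cat(labels)
        for c in labels.unique().tolist():
            f = feats[labels == c]
            self.stats[c] = (f.mean(0), f.std(0) + 1e-4)

    def _replay_batch(self, n_per_class: int):
        """Sample pseudo-features from the stored lightweight generators."""
        xs, ys = [], []
        for c, (mu, sigma) in self.stats.items():
            xs.append(mu + sigma * torch.randn(n_per_class, self.feat_dim))
            ys.append(torch.full((n_per_class,), c, dtype=torch.long))
        return torch.cat(xs), torch.cat(ys)

    def train_step(self, x, y, replay_per_class: int = 8):
        """One optimization step on current-task data mixed with replayed features."""
        loss = F.cross_entropy(self.head(self.backbone(x)), y)
        if self.stats:  # rehearse previous classes through the classifier head
            fx, fy = self._replay_batch(replay_per_class)
            loss = loss + F.cross_entropy(self.head(fx), fy)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()
```

The memory saving comes from the generator's size: two vectors per class replace a buffer of stored images, and the same pattern would apply if the Gaussian were swapped for a small VAE or other compact generative model.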