Continual learning research is advancing rapidly, driven by the twin goals of adapting to dynamic environments and mitigating catastrophic forgetting. Recent studies explore how to balance stability and plasticity so that models can learn a sequence of tasks while preserving previously acquired knowledge. Notable directions include meta-knowledge distillation, gradient space splitting, and dynamic dual-buffer strategies, all aimed at improving the efficiency and effectiveness of continual learning. Research has also highlighted the role of memorization in incremental learning and the need for scalable, robust methods that hold up under real-world deployment.
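To make the stability-plasticity trade-off concrete, here is a minimal sketch of the generic distillation recipe that underlies many such methods (in the spirit of LwF-style approaches, not the meta-knowledge distillation of any specific paper below): the new-task loss drives plasticity, while a KL term toward a frozen snapshot of the model preserves old behavior. The names `distillation_step`, `prev_model`, `alpha`, and `T` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_step(model, prev_model, x, y, alpha=0.5, T=2.0):
    """Blend the new-task loss (plasticity) with a distillation loss
    toward a frozen snapshot of the model (stability)."""
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    with torch.no_grad():
        old_logits = prev_model(x)  # frozen copy saved before the new task

    # Temperature-softened KL between current and previous output distributions.
    distill_loss = F.kl_div(
        F.log_softmax(logits / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    return (1.0 - alpha) * task_loss + alpha * distill_loss
```

Larger `alpha` favors stability (less forgetting) at the cost of slower adaptation to the new task; the temperature `T` controls how much of the teacher's soft class structure is transferred.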
Noteworthy papers include:
- Model-Free Graph Data Selection under Distribution Shift: proposes a model-free framework for graph domain adaptation.
- Towards Heterogeneous Continual Graph Learning via Meta-knowledge Distillation: introduces a meta-learning-based knowledge distillation framework for continual learning on heterogeneous graphs.
- SplitLoRA: combines Low-Rank Adaptation with gradient space splitting for continual learning (see the sketch after this list).
- LADA: introduces a scalable, label-specific CLIP adapter for continual learning.
- Frugal Incremental Generative Modeling using Variational Autoencoders: devises a replay-free incremental learning model based on Variational Autoencoders.
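SplitLoRA's precise splitting procedure is not detailed here, so the following is a hypothetical sketch of the underlying idea only: train low-rank LoRA factors while projecting their gradients out of a subspace reserved for old tasks, so updates move only through the "free" part of gradient space. `LoRALinear`, `project_out`, and `old_task_basis` are illustrative names, and the orthogonal projection stands in for whatever split rule the paper actually uses.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the low-rank factors are trained
        self.A = nn.Parameter(0.01 * torch.randn(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

def project_out(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Remove the component of `grad` lying in the row span of `basis`
    (an orthonormal basis of gradient directions deemed important to
    old tasks), keeping the update orthogonal to that subspace."""
    coeffs = grad @ basis.T        # (rank, k) coordinates in the old subspace
    return grad - coeffs @ basis   # orthogonal remainder

# After loss.backward(), constrain the update before stepping, e.g.:
#   layer.A.grad = project_out(layer.A.grad, old_task_basis)
#   optimizer.step()
```

The design intent is that old-task knowledge lives in the protected subspace while new tasks adapt in its orthogonal complement; how that subspace is estimated and maintained is exactly what methods in this family differ on.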