Continual Learning Advancements

The field of continual learning is advancing rapidly, with a focus on methods that adapt to dynamic environments while mitigating catastrophic forgetting. Recent studies explore ways to balance stability and plasticity, enabling models to learn a sequence of tasks while preserving previously acquired knowledge. Notable directions include meta-knowledge distillation, gradient space splitting, and dynamic dual buffer strategies, each aimed at improving the efficiency and effectiveness of continual learning. Research has also highlighted the role of memorization in incremental learning and the need for scalable, robust methods suited to real-world deployment.
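To make the stability-plasticity trade-off concrete, the sketch below shows a generic distillation-regularized training step in PyTorch: a frozen teacher trained on earlier tasks keeps the student's outputs close to old behavior (stability) while a standard task loss drives learning on new data (plasticity). This is an illustrative baseline under assumed names and hyperparameters (`student`, `teacher`, `alpha`, `T`), not the specific meta-knowledge distillation method of the papers cited below.

```python
# Illustrative sketch only: a generic distillation-regularized update for
# continual learning. Model names, loss weights, and temperature are
# assumptions, not values from any cited paper.
import torch
import torch.nn.functional as F

def continual_step(student, teacher, batch, optimizer, alpha=0.5, T=2.0):
    """One training step on the new task, regularized so the student's
    outputs stay close to a frozen teacher trained on earlier tasks."""
    x, y = batch
    optimizer.zero_grad()
    logits = student(x)
    task_loss = F.cross_entropy(logits, y)       # learn the new task (plasticity)
    with torch.no_grad():
        teacher_logits = teacher(x)              # knowledge from old tasks
    distill_loss = F.kl_div(                     # stay close to old behavior (stability)
        F.log_softmax(logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    loss = (1 - alpha) * task_loss + alpha * distill_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Raising `alpha` weights stability over plasticity; the temperature `T` softens both distributions so the student also matches the teacher's relative confidences, not just its top prediction.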

Noteworthy papers include the following. Model-Free Graph Data Selection under Distribution Shift proposes a model-free framework for graph domain adaptation. Towards Heterogeneous Continual Graph Learning via Meta-knowledge Distillation introduces a meta-learning-based knowledge distillation framework for continual learning on heterogeneous graphs. SplitLoRA proposes a continual learning approach based on Low-Rank Adaptation and gradient space splitting (a generic LoRA sketch follows below). LADA introduces a scalable label-specific CLIP adapter for continual learning. Frugal Incremental Generative Modeling using Variational Autoencoders devises a replay-free incremental learning model built on variational autoencoders.
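A common ingredient in LoRA-based continual learners such as SplitLoRA is a frozen backbone extended with low-rank adapters. The sketch below is a minimal, assumed implementation of per-task adapters in PyTorch (the class `TaskLoRALinear` and its API are hypothetical); it does not implement SplitLoRA's gradient space splitting, which further constrains which gradient subspaces each update may touch.

```python
# Illustrative sketch only: a linear layer with one low-rank adapter per
# task. Class and argument names are assumptions; SplitLoRA's gradient
# space splitting is NOT implemented here.
import torch.nn as nn

class TaskLoRALinear(nn.Module):
    """Frozen base linear layer plus one low-rank adapter per task."""
    def __init__(self, in_features, out_features, rank=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # shared weights stay frozen
        self.base.bias.requires_grad_(False)
        self.rank = rank
        self.adapters = nn.ModuleList()          # one (down, up) pair per task

    def add_task(self):
        """Freeze all earlier adapters and attach a fresh trainable one."""
        in_f, out_f = self.base.in_features, self.base.out_features
        down = nn.Linear(in_f, self.rank, bias=False)
        up = nn.Linear(self.rank, out_f, bias=False)
        nn.init.zeros_(up.weight)                # new adapter starts as a no-op
        for a in self.adapters:                  # protect previous tasks
            a.requires_grad_(False)
        self.adapters.append(nn.Sequential(down, up))

    def forward(self, x):
        out = self.base(x)
        for a in self.adapters:                  # sum of low-rank updates
            out = out + a(x)
        return out
```

Freezing earlier adapters is one simple way to trade plasticity (a fresh adapter per task) against stability (old adapters and the backbone never change); calling `add_task()` before each new task starts a new trainable adapter.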

Sources

Model-Free Graph Data Selection under Distribution Shift

Towards Heterogeneous Continual Graph Learning via Meta-knowledge Distillation

PoseBH: Prototypical Multi-Dataset Training Beyond Human Pose Estimation

What is the role of memorization in Continual Learning?

Evolving Machine Learning: A Survey

Dynamic Dual Buffer with Divide-and-Conquer Strategy for Online Continual Learning

Continual Learning Beyond Experience Rehearsal and Full Model Surrogates

SplitLoRA: Balancing Stability and Plasticity in Continual Learning Through Gradient Space Splitting

Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning

Frugal Incremental Generative Modeling using Variational Autoencoders

One Rank at a Time: Cascading Error Dynamics in Sequential Learning

LADA: Scalable Label-Specific CLIP Adapter for Continual Learning

DeepChest: Dynamic Gradient-Free Task Weighting for Effective Multi-Task Learning in Chest X-ray Classification
