Continual Learning Advances

The field of continual learning is converging on methods that address the stability-plasticity dilemma, enabling neural networks to learn and adapt incrementally without forgetting previously acquired knowledge. Recent work emphasizes architectural perspectives on this trade-off, progressive neural collapse, and dual-adapter designs. There is also growing interest in rethinking the role of pre-trained and foundation models in continual learning, for example by adapting the pre-trained model before the core continual learning process begins and by leveraging neural network reprogrammability. Noteworthy papers include Rethinking Continual Learning with Progressive Neural Collapse, which introduces a framework that removes the need for a fixed global equiangular tight frame (ETF) in continual learning, and CL-LoRA, which proposes a dual-adapter architecture combining task-shared adapters that accumulate cross-task knowledge with task-specific adapters that capture the unique features of each new task (see the sketch below).
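
To make the dual-adapter idea concrete, here is a minimal PyTorch sketch of a frozen pre-trained linear layer augmented with one task-shared low-rank adapter and one low-rank adapter per task. This is an illustrative assumption of how such a module could look, not the CL-LoRA reference implementation; the class name, method names, and the rank value are hypothetical.

```python
# Illustrative sketch of a dual-adapter low-rank module (not the CL-LoRA code).
import torch
import torch.nn as nn


class DualAdapterLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # pre-trained weights stay frozen
            p.requires_grad = False
        d_in, d_out = base.in_features, base.out_features
        # Task-shared adapter: intended to accumulate cross-task knowledge.
        self.shared_down = nn.Linear(d_in, rank, bias=False)
        self.shared_up = nn.Linear(rank, d_out, bias=False)
        # Task-specific adapters: one low-rank pair appended per new task.
        self.task_down = nn.ModuleList()
        self.task_up = nn.ModuleList()
        self.rank = rank

    def add_task(self):
        d_in, d_out = self.base.in_features, self.base.out_features
        down = nn.Linear(d_in, self.rank, bias=False)
        up = nn.Linear(self.rank, d_out, bias=False)
        nn.init.zeros_(up.weight)  # new adapter starts as a zero update
        self.task_down.append(down)
        self.task_up.append(up)

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        out = self.base(x)                                   # frozen backbone
        out = out + self.shared_up(self.shared_down(x))      # shared adapter
        out = out + self.task_up[task_id](self.task_down[task_id](x))
        return out


# Usage: wrap a pre-trained projection, register one task, run a forward pass.
layer = DualAdapterLinear(nn.Linear(768, 768), rank=8)
layer.add_task()
y = layer(torch.randn(4, 768), task_id=0)
```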

Sources

Rethinking Continual Learning with Progressive Neural Collapse

CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning

EWGN: Elastic Weight Generation and Context Switching in Deep Learning

PAID: Pairwise Angular-Invariant Decomposition for Continual Test-Time Adaptation

The Future of Continual Learning in the Era of Foundation Models: Three Key Directions

Rethinking the Stability-Plasticity Trade-off in Continual Learning from an Architectural Perspective

Adapt before Continual Learning

Neural Network Reprogrammability: A Unified Theme on Model Reprogramming, Prompt Tuning, and Prompt Instruction
