Continual Learning for Efficient On-Device Adaptation

The field of continual learning is moving toward efficient on-device adaptation, with a focus on methods that learn from streaming data without requiring large amounts of memory or compute. Recent advances include dynamic subnetwork adaptation, zeroth-order optimization, and null space adaptation, all of which show promising results in mitigating catastrophic forgetting while maintaining model performance. Among the highlighted papers, MeDyate achieves state-of-the-art performance under extreme memory constraints, and NuSA-CL enables memory-free continual learning for zero-shot vision-language models; PLAN introduces proactive low-rank allocation, while COLA retrieves task-specific adapters with an autoencoder. These developments have significant implications for real-world applications where on-device learning is crucial. Two of the recurring ideas, zeroth-order optimization and null-space-constrained updates, are sketched below.
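
To make the zeroth-order idea concrete, here is a minimal sketch of SPSA-style two-point gradient estimation, the generic mechanism behind memory-efficient zeroth-order training. It is not taken from any of the listed papers; the function names and the NumPy toy problem are illustrative assumptions.

```python
import numpy as np

def spsa_gradient(loss_fn, params, eps=1e-3):
    """Estimate the gradient of loss_fn at params from a single
    pair of perturbed forward passes (no backpropagation)."""
    # Random +/-1 (Rademacher) perturbation direction.
    delta = np.random.choice([-1.0, 1.0], size=params.shape)
    loss_plus = loss_fn(params + eps * delta)
    loss_minus = loss_fn(params - eps * delta)
    # Two-sided finite-difference estimate along delta; with +/-1
    # entries, multiplying by delta equals dividing elementwise.
    return (loss_plus - loss_minus) / (2.0 * eps) * delta

def zo_sgd_step(loss_fn, params, lr=1e-2, eps=1e-3):
    """One zeroth-order SGD step: forward passes only, so memory
    cost stays close to that of inference, not training."""
    return params - lr * spsa_gradient(loss_fn, params, eps)

# Toy usage: minimize a quadratic "loss" over streaming steps.
if __name__ == "__main__":
    target = np.array([1.0, -2.0, 0.5])
    loss = lambda w: float(np.sum((w - target) ** 2))
    w = np.zeros(3)
    for _ in range(2000):
        w = zo_sgd_step(loss, w)
    print(w)  # approaches target without explicit gradients
```

Because only forward evaluations are needed, the optimizer state is essentially the parameter vector itself, which is what makes zeroth-order methods attractive for memory-constrained, forgetting-aware training.

Null space adaptation can likewise be illustrated generically: project each candidate weight update into the null space of directions that previous tasks rely on, so old input-output behavior is preserved. NuSA-CL itself is memory-free, so its actual construction differs; the stored-activation matrix below is a hypothetical stand-in for whatever subspace estimate a given method maintains.

```python
import numpy as np

def null_space_projector(A, tol=1e-6):
    """Projection matrix onto the null space of A, where rows of A
    are (hypothetically stored) activations from previous tasks."""
    # Singular directions with non-negligible singular values span
    # the subspace that prior tasks actually use.
    _, s, vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol))
    V_null = vt[rank:].T          # orthonormal basis for the null space
    return V_null @ V_null.T      # symmetric projector

# Project a raw weight update so it cannot disturb responses to
# previously seen activations: A @ dW_proj.T is ~0.
A = np.random.randn(32, 64)       # hypothetical stored activations
dW = np.random.randn(10, 64)      # hypothetical raw update
P = null_space_projector(A)
dW_proj = dW @ P
print(np.abs(A @ dW_proj.T).max())  # ~0: old outputs preserved
```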

Sources

Memory Constrained Dynamic Subnetwork Update for Transfer Learning

More Than Memory Savings: Zeroth-Order Optimization Mitigates Forgetting in Continual Learning

Memory-Free Continual Learning with Null Space Adaptation for Zero-Shot Vision-Language Models

PLAN: Proactive Low-Rank Allocation for Continual Learning

Buffer layers for Test-Time Adaptation

Adaptive Data Selection for Multi-Layer Perceptron Training: A Sub-linear Value-Driven Method

Randomized Neural Network with Adaptive Forward Regularization for Online Task-free Class Incremental Learning

COLA: Continual Learning via Autoencoder Retrieval of Adapters

Knowledge-guided Continual Learning for Behavioral Analytics Systems

Continual Low-Rank Adapters for LLM-based Generative Recommender Systems

Model Inversion with Layer-Specific Modeling and Alignment for Data-Free Continual Learning
