The field of continual learning is moving toward efficient on-device adaptation, with a focus on methods that learn from streaming data without large memory or compute budgets. Recent advances include dynamic subnetwork adaptation, zeroth-order optimization, and null space adaptation (the latter two sketched below), which have shown promising results in mitigating catastrophic forgetting while maintaining model performance. Among the notable papers, MeDyate achieves state-of-the-art performance under extreme memory constraints, NuSA-CL enables memory-free continual learning for zero-shot vision-language models, PLAN introduces a framework for proactive low-rank allocation, and COLA proposes autoencoder-based retrieval of adapters. Together, these developments have direct implications for real-world deployments where on-device learning is essential.
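Zeroth-order optimization is attractive in this setting because it estimates gradients from forward passes alone, avoiding the activation and optimizer-state memory of backpropagation. Below is a minimal NumPy sketch of a two-point Gaussian-smoothing estimator in that spirit; the toy quadratic `loss_fn`, the probe scale `eps`, and the step size are illustrative assumptions, not the estimator of any specific paper above.

```python
import numpy as np

def zo_grad_estimate(loss_fn, params, eps=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate along a random direction.

    Only two forward evaluations and one probe vector are needed, so no
    activations or backward graph are stored -- the memory profile that
    makes zeroth-order methods attractive for on-device adaptation.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(params.shape)  # random probe direction
    # Central finite difference of the loss along z, scaled back onto z.
    delta = (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)
    return delta * z

# Usage on a toy quadratic objective; 'target' is an illustrative
# stand-in for whatever the adapted model is fit to.
target = np.ones(10)
loss_fn = lambda w: float(np.sum((w - target) ** 2))

w = np.zeros(10)
rng = np.random.default_rng(0)
for _ in range(2000):
    w -= 0.01 * zo_grad_estimate(loss_fn, w, rng=rng)
print(round(loss_fn(w), 4))  # approaches 0 without a single backward pass
```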
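Null space adaptation takes a complementary route: it keeps ordinary gradients but projects each update into directions the previous task's features do not occupy, so old outputs are left (approximately) unchanged. The sketch below shows the generic recipe via an SVD of the old-task feature covariance; it illustrates the general idea only, not the NuSA-CL method itself, and `rank_tol` and the toy data are hypothetical.

```python
import numpy as np

def null_space_projector(features, rank_tol=1e-3):
    """Projector onto the approximate null space of the old-task
    feature covariance.

    features: (n_samples, d) activations collected on old-task data.
    Directions with near-zero singular values are ones the old task
    barely uses; updates confined to them leave old outputs intact.
    """
    cov = features.T @ features / len(features)
    _, s, vt = np.linalg.svd(cov)
    null_basis = vt[s < rank_tol * s.max()]  # (k, d) small-variance directions
    # If the covariance is full rank, the projector is all zeros and no
    # update direction is considered safe.
    return null_basis.T @ null_basis         # (d, d) projector

def project_update(grad, projector):
    """Restrict a new-task gradient to the old task's feature null space."""
    return grad @ projector

# Usage: old-task features confined to an 8-dim subspace of a 32-dim space.
rng = np.random.default_rng(0)
old_feats = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 32))
P = null_space_projector(old_feats)
grad = rng.normal(size=(8, 32))        # hypothetical layer gradient (out, d)
safe_grad = project_update(grad, P)
print(np.abs(safe_grad @ old_feats.T).max())  # ~0: old-task outputs undisturbed
```

The relative tolerance `rank_tol` is the key design choice here: it decides which feature directions count as "unused" by the old task and therefore safe to overwrite.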