Continual Learning Advances

The field of continual learning is moving toward more efficient and effective methods for adapting models to new data and tasks. A central challenge is catastrophic forgetting, the tendency of a model to lose previously learned knowledge when trained on new tasks, and researchers are exploring new approaches to mitigate it. One notable direction is prompt-based methods, which typically keep a pretrained backbone frozen and adapt only small, learnable prompt parameters; these have shown promise in incremental learning settings. Another area of focus is the development of more realistic benchmarks for evaluating incremental learning methods, for example benchmarks that capture domain shifts and class expansions.

Noteworthy papers include: Towards Efficient Prompt-based Continual Learning in Distributed Medical AI, which proposes a prompt-based continual learning approach for distributed medical AI applications, and RICO: Two Realistic Benchmarks and an In-Depth Analysis for Incremental Learning in Object Detection, which introduces two realistic benchmarks for incremental learning in object detection. Together, these works point toward continual learning systems that retain old knowledge while adapting efficiently to new tasks.
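To make the prompt-based idea concrete, here is a minimal toy sketch (not any specific paper's method): a frozen backbone produces features, a pool of learnable prompts is queried by key similarity, and a new task's update touches only the selected prompt slot, leaving prompts used by earlier tasks untouched. All names, sizes, and the update rule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8        # feature dimension of the hypothetical frozen backbone
POOL_SIZE = 4  # number of learnable prompts in the pool

# Stand-in for a frozen pretrained backbone: a fixed random projection.
W_frozen = rng.normal(size=(DIM, DIM))

def backbone(x):
    """Frozen feature extractor; its weights are never updated."""
    return np.tanh(W_frozen @ x)

# Learnable prompt pool: each prompt has a key (for selection) and a
# value (a vector added to the features). Only these are trained.
prompt_keys = rng.normal(size=(POOL_SIZE, DIM))
prompt_values = np.zeros((POOL_SIZE, DIM))

def select_prompt(query):
    """Pick the prompt whose key is most cosine-similar to the query."""
    sims = prompt_keys @ query / (
        np.linalg.norm(prompt_keys, axis=1) * np.linalg.norm(query) + 1e-8
    )
    return int(np.argmax(sims))

def forward(x):
    feats = backbone(x)
    idx = select_prompt(feats)
    # Prompted features: frozen features plus the selected prompt value.
    return feats + prompt_values[idx], idx

# "Training" on a new task writes only to the selected prompt slot, so
# prompts relied on by earlier tasks are left intact (less forgetting).
x_new_task = rng.normal(size=DIM)
feats, idx = forward(x_new_task)
prompt_values[idx] += 0.1 * feats  # toy gradient step on the prompt only

updated = [i for i in range(POOL_SIZE) if np.any(prompt_values[i] != 0)]
print(updated)  # only the selected slot has changed
```

The key design point this sketch illustrates is parameter isolation: because the backbone is frozen and updates are routed to one prompt slot at a time, learning a new task cannot overwrite the weights that earlier tasks depend on.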

Sources

Towards Efficient Prompt-based Continual Learning in Distributed Medical AI

Exploring the Tradeoff Between Diversity and Discrimination for Continuous Category Discovery

Index-Aligned Query Distillation for Transformer-based Incremental Object Detection

Multi-Level Knowledge Distillation and Dynamic Self-Supervised Learning for Continual Learning

SEDEG: Sequential Enhancement of Decoder and Encoder's Generality for Class Incremental Learning with Small Memory

Empirical Evidences for the Effects of Feature Diversity in Open Set Recognition and Continual Learning

Monte Carlo Functional Regularisation for Continual Learning

RICO: Two Realistic Benchmarks and an In-Depth Analysis for Incremental Learning in Object Detection

Incremental Object Detection with Prompt-based Methods