Continual Learning and Semantic Segmentation Advancements

The field of continual learning and semantic segmentation is moving toward more effective methods for maintaining plasticity in neural networks and improving segmentation performance. Recent research has focused on reinitialization techniques, training-free methods, and adapting foundation models to downstream tasks. Notably, approaches such as selective weight reinitialization, class purification frameworks, and two-stage architectures have shown promising results, with the potential to improve semantic segmentation performance and enable more efficient continual learning. Noteworthy papers include:

Reinitializing weights vs units for maintaining plasticity in neural networks, which proposes a new algorithm for reinitializing the least useful weights in a network (see the sketch after this list).

Training-Free Class Purification for Open-Vocabulary Semantic Segmentation, which introduces a training-free class purification framework designed to address class redundancy and visual-language ambiguity.

Decoupling Continual Semantic Segmentation, which introduces a two-stage framework for continual semantic segmentation (CSS) that decouples class-aware detection from class-agnostic segmentation.
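To make the selective-reinitialization idea concrete, below is a minimal PyTorch sketch. It assumes utility is approximated by absolute weight magnitude; the paper's actual utility score, scheduling, and layer coverage are not specified here, so the function name `reinitialize_least_useful_weights` and the `fraction` parameter are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

def reinitialize_least_useful_weights(layer: nn.Linear, fraction: float = 0.01) -> None:
    """Reinitialize the lowest-utility weights of a linear layer in place.

    Utility here is approximated by absolute weight magnitude; the paper's
    actual utility measure may differ (this is an illustrative assumption).
    """
    with torch.no_grad():
        w = layer.weight
        k = max(1, int(fraction * w.numel()))
        # Find the k weights with the smallest utility (|w|).
        utility = w.abs().flatten()
        _, idx = torch.topk(utility, k, largest=False)
        # Draw fresh values matching nn.Linear's default uniform init,
        # whose bound works out to 1 / sqrt(fan_in).
        bound = 1.0 / (w.shape[1] ** 0.5)
        fresh = torch.empty(k, device=w.device).uniform_(-bound, bound)
        # Scatter the fresh values back into the flattened weight view.
        w.view(-1)[idx] = fresh

# Usage: periodically reset a small fraction of weights during training
layer = nn.Linear(128, 64)
reinitialize_least_useful_weights(layer, fraction=0.01)
```

In a continual learning loop, a reset like this would typically be applied every few hundred steps so that low-utility weights regain trainability without disturbing the bulk of the learned representation.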

Sources

Reinitializing weights vs units for maintaining plasticity in neural networks

Training-Free Class Purification for Open-Vocabulary Semantic Segmentation

SAM2-UNeXT: An Improved High-Resolution Baseline for Adapting Foundation Models to Downstream Segmentation Tasks

Revisiting Continual Semantic Segmentation with Pre-trained Vision Models

Learning from Oblivion: Predicting Knowledge Overflowed Weights via Retrodiction of Forgetting

Decoupling Continual Semantic Segmentation
