The field of domain adaptation is moving toward more efficient and effective methods for handling new, unseen domains. Recent work focuses on improving generalization across evolving and revisited domains and on adapting to new domains from limited data. Notable directions include dual-teacher frameworks, modality-collaborative low-rank decomposers, and collaborative learning with multiple foundation models, all of which report significant improvements over existing methods across a range of benchmarks. Noteworthy papers include:
- SloMo-Fast, which proposes a source-free, dual-teacher continual test-time adaptation framework with improved adaptability and generalization (see the first sketch after this list).
- Modality-Collaborative Low-Rank Decomposers, which introduces a framework for few-shot video domain adaptation that decomposes features into modality-unique and modality-shared components (see the second sketch after this list).
- Collaborative Learning with Multiple Foundation Models, which proposes a framework that jointly leverages two different foundation models to capture both global semantics and local contextual cues (see the third sketch after this list).
- DAPointMamba, which presents a framework for domain-adaptive point cloud completion with strong adaptability across domains.
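
To make the dual-teacher idea concrete, here is a minimal, generic sketch of one dual-teacher continual test-time adaptation step in PyTorch. It is not SloMo-Fast's actual algorithm: the EMA momenta, the averaged pseudo-target, and the KL consistency loss are illustrative assumptions.

```python
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


def ema_update(teacher: nn.Module, student: nn.Module, momentum: float) -> None:
    """Exponential-moving-average update of teacher parameters toward the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)


def adapt_step(student, slow_teacher, fast_teacher, optimizer, x):
    """One unlabeled test-time adaptation step on a batch x (no source data needed)."""
    with torch.no_grad():
        # A slowly updated teacher preserves source knowledge; a fast teacher
        # tracks the current domain. Their averaged prediction is the pseudo-target.
        target = 0.5 * (slow_teacher(x).softmax(-1) + fast_teacher(x).softmax(-1))
    log_pred = student(x).log_softmax(-1)
    loss = F.kl_div(log_pred, target, reduction="batchmean")  # consistency loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(slow_teacher, student, momentum=0.999)  # slow drift: stability
    ema_update(fast_teacher, student, momentum=0.9)    # fast drift: plasticity
    return loss.item()


if __name__ == "__main__":
    student = nn.Linear(16, 4)            # stand-in for a source-pretrained model
    slow_teacher = copy.deepcopy(student)
    fast_teacher = copy.deepcopy(student)
    opt = torch.optim.SGD(student.parameters(), lr=1e-3)
    print(adapt_step(student, slow_teacher, fast_teacher, opt, torch.randn(8, 16)))
```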
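
Below is a minimal sketch of the low-rank decomposition idea for two video modalities (e.g., RGB and optical flow). The rank, the shared/unique split, and the alignment and overlap losses are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn


class LowRankDecomposer(nn.Module):
    """Splits a modality feature into a shared and a unique low-rank component."""

    def __init__(self, dim: int, rank: int):
        super().__init__()
        # Down-/up-projection pairs form the low-rank bottlenecks.
        self.shared_down = nn.Linear(dim, rank, bias=False)
        self.shared_up = nn.Linear(rank, dim, bias=False)
        self.unique_down = nn.Linear(dim, rank, bias=False)
        self.unique_up = nn.Linear(rank, dim, bias=False)

    def forward(self, feat: torch.Tensor):
        shared = self.shared_up(self.shared_down(feat))
        unique = self.unique_up(self.unique_down(feat))
        return shared, unique


def decomposition_loss(shared_a, unique_a, shared_b, unique_b):
    # Align the shared parts of the two modalities and discourage overlap
    # between each modality's shared and unique parts (both terms are assumptions).
    align = (shared_a - shared_b).pow(2).mean()
    overlap = (shared_a * unique_a).mean().abs() + (shared_b * unique_b).mean().abs()
    return align + overlap


# Usage with hypothetical 512-d backbone features for RGB and flow clips:
rgb, flow = torch.randn(4, 512), torch.randn(4, 512)
dec_rgb, dec_flow = LowRankDecomposer(512, 16), LowRankDecomposer(512, 16)
shared_rgb, unique_rgb = dec_rgb(rgb)
shared_flow, unique_flow = dec_flow(flow)
loss = decomposition_loss(shared_rgb, unique_rgb, shared_flow, unique_flow)
```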
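
Finally, a minimal sketch of collaborating two frozen foundation models by fusing a global embedding with pooled patch-level features through a small trainable head. The encoders are stand-ins and the concatenation-based fusion is an assumption, not the paper's architecture.

```python
import torch
import torch.nn as nn


class DualEncoderFusion(nn.Module):
    """Fuses a global embedding with pooled local features via a small trainable head."""

    def __init__(self, global_dim: int, local_dim: int, num_classes: int):
        super().__init__()
        self.proj_local = nn.Linear(local_dim, global_dim)
        self.head = nn.Linear(2 * global_dim, num_classes)

    def forward(self, global_feat: torch.Tensor, local_feats: torch.Tensor):
        # global_feat: (B, global_dim) from one frozen encoder (global semantics);
        # local_feats: (B, N_patches, local_dim) from the other (local contextual cues).
        pooled_local = self.proj_local(local_feats).mean(dim=1)
        fused = torch.cat([global_feat, pooled_local], dim=-1)
        return self.head(fused)


# Usage with hypothetical frozen encoders emitting 768-d global and 256-d patch features:
fusion = DualEncoderFusion(global_dim=768, local_dim=256, num_classes=10)
logits = fusion(torch.randn(2, 768), torch.randn(2, 196, 256))
```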