Advances in Continual Learning and Unlearning for Medical Vision-Language Models

The field of medical vision-language models is evolving rapidly, with a focus on methods that learn efficiently from new data while preserving previous knowledge. Recent research has highlighted the importance of continual learning and unlearning in these models, particularly in clinical settings where new imaging protocols and diagnostic capabilities are constantly introduced. A key challenge is catastrophic forgetting: when a model is adapted to a new task, it loses previously learned information. To address this, researchers have proposed dual-phase architectures, prompt-aware adaptive elastic weight consolidation, and geometrically principled debiasing frameworks, which have shown promising results in mitigating forgetting and improving performance retention. Noteworthy papers include MedPEFT-CL, which proposes a parameter-efficient continual learning framework for medical vision-language tasks, and SineProject, which introduces a simple method for stabilizing vision-language alignment during unlearning.

Overall, the field is moving toward more efficient, effective, and robust methods for continual learning and unlearning in medical vision-language models, with an emphasis on preserving previous knowledge while adapting to new clinical requirements.
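To make the elastic-weight-consolidation idea concrete: EWC mitigates catastrophic forgetting by penalizing changes to parameters that were important for earlier tasks, with importance estimated by the (diagonal) Fisher information. The sketch below is a generic, minimal illustration of the standard EWC penalty only; it is not code from any of the papers listed here, and the function name and toy values are assumptions for illustration.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher_diag, lam=1.0):
    """Minimal sketch of the elastic weight consolidation penalty.

    theta       : current model parameters (flattened)
    theta_star  : parameters saved after training on the previous task
    fisher_diag : diagonal Fisher information estimate; larger values mark
                  parameters more important to the old task
    lam         : regularization strength trading off old vs. new task
    """
    theta = np.asarray(theta, dtype=float)
    theta_star = np.asarray(theta_star, dtype=float)
    fisher_diag = np.asarray(fisher_diag, dtype=float)
    # Quadratic pull back toward the old solution, weighted by importance.
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

# A parameter with high Fisher weight (4.0) that drifted by 1.0 dominates
# the penalty; the unimportant one (0.1) that did not drift contributes 0.
penalty = ewc_penalty([1.0, 2.0], [0.0, 2.0], fisher_diag=[4.0, 0.1])
# 0.5 * (4.0 * 1.0**2 + 0.1 * 0.0**2) = 2.0
```

In a continual-learning loop this penalty is added to the new task's loss, so gradient updates are free to move unimportant parameters while important ones stay near their previous values.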
Sources
MedPEFT-CL: Dual-Phase Parameter-Efficient Continual Learning with Medical Semantic Adapter and Bidirectional Memory Consolidation
Bias Is a Subspace, Not a Coordinate: A Geometric Rethinking of Post-hoc Debiasing in Vision-Language Models
Vision Token Masking Alone Cannot Prevent PHI Leakage in Medical Document OCR: A Systematic Evaluation