Continual Learning and Model Composition

The field of continual learning is moving toward scalable and reversible model composition. Researchers are developing frameworks for interference-free, reversible composition of fine-tuned models, allowing new models to be integrated while preserving performance on earlier tasks. This direction is driven by the need for modular and compliant AI system design, particularly in applications where models must be continually updated and recomposed. Noteworthy papers include Modular Delta Merging with Orthogonal Constraints, which proposes a framework for scalable and reversible model composition; Progressive Homeostatic and Plastic Prompt Tuning, which achieves state-of-the-art performance in audio-visual multi-task incremental learning via a three-stage prompt tuning method; and RainbowPrompt, which introduces a prompt-evolving mechanism that adaptively aggregates base prompts, preserving diversity and facilitating the learning of new tasks.
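To make the reversible-composition idea concrete, here is a minimal toy sketch of delta merging with orthogonal constraints. It is an illustration of the general technique, not the specific method of the cited paper: each fine-tuned model is reduced to a weight delta relative to the base model, deltas are orthogonalized (Gram-Schmidt) before merging to limit interference, and any stored delta can later be subtracted out exactly. All names and shapes are hypothetical.

```python
import numpy as np

def orthogonalize(delta, existing):
    """Project `delta` to be orthogonal to every already-merged delta."""
    d = delta.copy()
    for e in existing:
        d -= (d @ e) / (e @ e) * e
    return d

base = np.zeros(4)                       # toy base-model weights
deltas = [np.array([1., 0., 0., 0.]),    # task-A delta
          np.array([1., 1., 0., 0.])]    # task-B delta (overlaps with A)

merged = base.copy()
stored = []                              # orthogonalized deltas, kept for removal
for d in deltas:
    d_orth = orthogonalize(d, stored)
    stored.append(d_orth)
    merged += d_orth

# Reversibility: subtracting task-A's stored delta leaves only task-B's
# contribution, because the stored deltas are mutually orthogonal.
merged -= stored[0]
assert np.allclose(merged, stored[1])
```

Because the stored deltas do not overlap, removing one task never perturbs the others, which is the property that makes this style of composition attractive for compliance-driven model updates.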

Sources

Modular Delta Merging with Orthogonal Constraints: A Scalable Framework for Continual and Reversible Model Composition

Progressive Homeostatic and Plastic Prompt Tuning for Audio-Visual Multi-Task Incremental Learning

RainbowPrompt: Diversity-Enhanced Prompt-Evolving for Continual Learning

Forgetting of task-specific knowledge in model merging-based continual learning
