The field of continual learning is moving toward scalable and reversible model composition. Researchers are developing frameworks that enable interference-free, reversible composition of fine-tuned models, so new models can be integrated while performance is preserved across tasks. This direction is driven by the need for modular and compliant AI system design, particularly in applications where models must be continually updated and composed. Noteworthy papers in this area include:

- Modular Delta Merging with Orthogonal Constraints, which proposes a novel framework for scalable and reversible model composition.
- Progressive Homeostatic and Plastic Prompt Tuning, which achieves state-of-the-art performance in audio-visual multi-task incremental learning by introducing a three-stage prompt tuning method.
- RainbowPrompt, which proposes a prompt-evolving mechanism that adaptively aggregates base prompts, ensuring diversity and facilitating the learning of new tasks.
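The core idea behind reversible, interference-free delta merging can be sketched in a few lines. This is a minimal illustration under assumed details, not the algorithm from Modular Delta Merging with Orthogonal Constraints: each fine-tuned model is represented as a weight delta from a shared base, each incoming delta is projected onto the orthogonal complement of the deltas already merged (so additions do not interfere), and removing a model simply subtracts the delta that was applied. The class and method names, and the flattened-vector weight representation, are hypothetical.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

def vsub(u, v):
    return [a - b for a, b in zip(u, v)]

def vscale(u, c):
    return [c * a for a in u]

class DeltaMerger:
    """Toy composition of fine-tuned models as weight deltas from a base.

    Each task's delta is projected orthogonal to the deltas already merged
    (a Gram-Schmidt step), so tasks do not overwrite one another, and any
    task can be removed exactly by subtracting its stored projected delta.
    """

    def __init__(self, base_weights):
        self.base = list(base_weights)
        self.merged = list(base_weights)
        self.deltas = {}  # task name -> projected delta actually applied

    def add_model(self, name, finetuned_weights):
        delta = vsub(finetuned_weights, self.base)
        # Remove components along previously merged deltas.
        for prev in self.deltas.values():
            norm_sq = dot(prev, prev)
            if norm_sq > 0:
                delta = vsub(delta, vscale(prev, dot(delta, prev) / norm_sq))
        self.deltas[name] = delta
        self.merged = vadd(self.merged, delta)

    def remove_model(self, name):
        # Reversible: subtract exactly what this task contributed.
        self.merged = vsub(self.merged, self.deltas.pop(name))
```

Because `remove_model` subtracts exactly the projected delta that was added, composition is reversible model-by-model, which is the property motivating this line of work.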