The field of multimodal large language models is moving toward more efficient methods for model merging, continual learning, and task adaptation. Researchers are developing techniques to address challenges such as expert uniformity, router rigidity, and catastrophic forgetting. Notable trends include layer-wise task vector fusion, dynamic token-aware routing, and branch-based LoRA frameworks, which aim to improve performance while reducing cross-task interference. Noteworthy papers in this area include EvoMoE, which introduces a novel expert initialization strategy and a dynamic routing mechanism, and BranchLoRA, which enhances multimodal continual instruction tuning with a flexible tuning-freezing mechanism. Additionally, StatsMerging offers a lightweight, learning-based model merging method guided by weight distribution statistics, and Hierarchical-Task-Aware Multi-modal Mixture of Incremental LoRA Experts achieves task recognition by clustering visual-text embeddings.
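
To make the layer-wise task vector fusion idea concrete, here is a minimal sketch in Python. The parameter dictionaries, per-layer fusion coefficients, and merge rule are illustrative assumptions for a generic task-vector merge, not the specific procedures proposed in StatsMerging, EvoMoE, or the other papers above.

```python
# Minimal sketch of layer-wise task vector fusion (illustrative assumptions only,
# not the procedure from StatsMerging, EvoMoE, or BranchLoRA).
from typing import Dict, List

import torch


def task_vector(base: Dict[str, torch.Tensor],
                finetuned: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    """Task vector = fine-tuned weights minus base weights, per parameter."""
    return {name: finetuned[name] - base[name] for name in base}


def merge_layerwise(base: Dict[str, torch.Tensor],
                    task_vectors: List[Dict[str, torch.Tensor]],
                    layer_weights: List[Dict[str, float]]) -> Dict[str, torch.Tensor]:
    """Fuse several task vectors into the base model with per-layer coefficients.

    layer_weights[i][name] is the assumed fusion coefficient for task i at
    parameter `name`; a statistics-guided method would predict these
    coefficients rather than take them as given, as done here.
    """
    merged = {name: p.clone() for name, p in base.items()}
    for tv, weights in zip(task_vectors, layer_weights):
        for name, delta in tv.items():
            merged[name] += weights.get(name, 1.0) * delta
    return merged


if __name__ == "__main__":
    # Toy two-layer "model" with two fine-tuned variants (hypothetical tensors).
    base = {"layer1.weight": torch.zeros(2, 2), "layer2.weight": torch.zeros(2, 2)}
    ft_a = {"layer1.weight": torch.ones(2, 2), "layer2.weight": torch.ones(2, 2)}
    ft_b = {"layer1.weight": -torch.ones(2, 2), "layer2.weight": 2 * torch.ones(2, 2)}

    tvs = [task_vector(base, ft_a), task_vector(base, ft_b)]
    weights = [{"layer1.weight": 0.5, "layer2.weight": 0.3},
               {"layer1.weight": 0.5, "layer2.weight": 0.7}]
    merged = merge_layerwise(base, tvs, weights)
    print(merged["layer2.weight"])  # 0.3 * 1 + 0.7 * 2 = 1.7 in every entry
```

In this toy example the two task vectors cancel in the first layer but combine in the second, illustrating why per-layer coefficients matter when merged tasks interfere with each other.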