The field of machine learning is moving toward more efficient and scalable algorithms for multi-task learning and federated learning. Recent developments have focused on improving model performance in these settings, with particular emphasis on challenges such as negative transfer and task conflicts. Notable advancements include personalized information surgery frameworks, multi-task multi-domain architectures, and cluster-based client selection methods. These innovations have shown meaningful gains in both performance and efficiency, with applications in recommender systems, ad ranking, and edge computing.
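To make the notion of task conflict concrete, the sketch below (a generic illustration, not the method of any paper discussed here) shows how negative transfer is commonly diagnosed in multi-task learning: two task heads share a trunk, and a negative cosine similarity between the tasks' gradients on the shared parameters indicates they are pulling the trunk in opposing directions. Model sizes, losses, and data are illustrative placeholders.

```python
# Minimal sketch of gradient-conflict detection in multi-task learning.
# All shapes and losses here are illustrative assumptions.
import torch
import torch.nn as nn

shared = nn.Linear(16, 8)                                     # shared trunk
heads = nn.ModuleList([nn.Linear(8, 1) for _ in range(2)])    # one head per task

x = torch.randn(32, 16)
targets = [torch.randn(32, 1), torch.randn(32, 1)]

# Compute each task's gradient with respect to the shared parameters.
task_grads = []
for head, y in zip(heads, targets):
    loss = nn.functional.mse_loss(head(shared(x)), y)
    grads = torch.autograd.grad(loss, list(shared.parameters()))
    task_grads.append(torch.cat([g.flatten() for g in grads]))

# Cosine similarity below zero means the two tasks push the shared trunk
# in conflicting directions, a standard signature of negative transfer.
cos = nn.functional.cosine_similarity(task_grads[0], task_grads[1], dim=0)
print(f"gradient cosine similarity: {cos.item():.3f}")
```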
Some noteworthy papers in this area include the following. DRGrad proposes a personalized Direct Routing Gradient framework for multi-task learning and shows superior performance over competing state-of-the-art models. MTMD introduces a Multi-Task Multi-Domain architecture for unified ad lightweight ranking, achieving a 12% to 36% improvement in offline loss value and a 2% online reduction in cost per click. FedGTEA presents a framework for Federated Class Incremental Learning that captures task-specific knowledge and model uncertainty in a scalable, communication-efficient manner. DOLFIN, a Distributed Online LoRA for Federated INcremental learning method, combines Vision Transformers with low-rank adapters to learn new tasks efficiently and stably in federated environments. CoLoR-GAN proposes a continual few-shot learning framework with low-rank adaptation in Generative Adversarial Networks, handling few-shot and continual learning jointly and leveraging low-rank tensors to adapt the model to target tasks efficiently; the low-rank-update idea shared by DOLFIN and CoLoR-GAN is sketched below.
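The following sketch illustrates the generic low-rank adaptation (LoRA) pattern that methods like DOLFIN and CoLoR-GAN build on, not their actual implementations: a frozen base weight is augmented with a trainable low-rank update, so each new task or client only trains (and communicates) the small rank-r factors. The layer sizes, rank, and initialization scale are assumptions chosen for illustration.

```python
# Minimal sketch of a LoRA-style adapter on a linear layer (illustrative only).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # frozen pretrained weight
        self.base.bias.requires_grad_(False)     # frozen pretrained bias
        # Trainable low-rank factors: update = B @ A, with rank << min(in, out).
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(64, 32, rank=4)
out = layer(torch.randn(8, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, f"trainable parameters: {trainable}")  # only the LoRA factors are trained
```

Because only the low-rank factors are updated, the per-round payload in a federated setting shrinks from the full weight matrix to the two small factors, which is the communication-efficiency argument these incremental-learning methods rely on.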