Advances in Multi-Task Learning and Federated Learning

Machine learning research is converging on more efficient and scalable algorithms for multi-task learning and federated learning. Recent work concentrates on long-standing challenges in these settings, notably negative transfer between tasks and conflicting task gradients. Notable advances include personalized information-surgery frameworks, multi-task multi-domain architectures, and cluster-based client selection methods. These innovations deliver measurable gains in performance and efficiency, with applications in recommender systems, ad ranking, and edge computing.
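To make the notion of task conflict concrete, the sketch below shows a PCGrad-style gradient projection (Yu et al., 2020): when two task gradients point in opposing directions, one is projected onto the normal plane of the other before the shared parameters are updated. This is a generic illustration of the problem these frameworks target, not the method of any paper summarized below.

```python
import torch

def project_conflicting_gradient(grad_a: torch.Tensor, grad_b: torch.Tensor) -> torch.Tensor:
    """PCGrad-style projection, shown only to illustrate 'task conflict'.

    If two task gradients have negative cosine similarity, remove from
    grad_a its component along grad_b, so the update for task A no longer
    actively undoes progress on task B. This is NOT the DRGrad method.
    """
    dot = torch.dot(grad_a, grad_b)
    if dot < 0:  # gradients point in opposing directions: a conflict
        grad_a = grad_a - (dot / grad_b.norm() ** 2) * grad_b
    return grad_a

# Toy example: two conflicting task gradients on a shared parameter vector.
g1 = torch.tensor([1.0, 0.0])
g2 = torch.tensor([-1.0, 1.0])
print(project_conflicting_gradient(g1, g2))  # tensor([0.5000, 0.5000])
```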

Some noteworthy papers in this area:

DRGrad proposes a personalized Direct Routing Gradient framework for multi-task learning recommendations, outperforming competing state-of-the-art models.

MTMD introduces a Multi-Task Multi-Domain architecture for unified ad lightweight ranking, improving offline loss by 12% to 36% and reducing online cost per click by 2%.

FedGTEA presents a framework for Federated Class-Incremental Learning that captures task-specific knowledge and model uncertainty in a scalable, communication-efficient manner.

DOLFIN (Distributed Online LoRA for Federated INcremental learning) combines Vision Transformers with low-rank adapters to learn new tasks efficiently and stably in federated environments.

CoLoR-GAN proposes a continual few-shot learning framework for Generative Adversarial Networks that handles few-shot and continual learning jointly, leveraging low-rank adaptation to adapt the model to target tasks efficiently (see the adapter sketch after this list).
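Several of these methods (DOLFIN, CoLoR-GAN) build on low-rank adaptation. The minimal sketch below shows the generic LoRA idea (Hu et al., 2021) applied to a frozen linear layer, assuming a PyTorch setting; the class name and hyperparameters are illustrative, and neither paper's exact variant is reproduced here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal low-rank adapter around a frozen linear layer.

    The pretrained weight W stays frozen; only a low-rank update B @ A is
    learned, so each new task adds just rank * (d_in + d_out) parameters.
    Illustrative sketch of the generic LoRA idea, not either paper's method.
    """

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Standard LoRA init: A small random, B zero, so training starts
        # from the unmodified pretrained behavior.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Toy usage: wrap one layer; only A and B receive gradients.
layer = LoRALinear(nn.Linear(64, 32), rank=4)
out = layer(torch.randn(8, 64))
print(out.shape)  # torch.Size([8, 32])
```

In a federated or continual setting, the appeal of this design is that only the small A and B matrices need to be communicated or stored per task, which is what makes the approaches above communication-efficient.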

Sources

Direct Routing Gradient (DRGrad): A Personalized Information Surgery for Multi-Task Learning (MTL) Recommendations

MTMD: A Multi-Task Multi-Domain Framework for Unified Ad Lightweight Ranking at Pinterest

Structured Cooperative Multi-Agent Reinforcement Learning: a Bayesian Network Perspective

Diversity Augmentation of Dynamic User Preference Data for Boosting Personalized Text Summarizers

Multitask Learning with Learned Task Relationships

Quantum Annealing for Staff Scheduling in Educational Environments

FedGTEA: Federated Class-Incremental Learning with Gaussian Task Embedding and Alignment

Cluster-Based Client Selection for Dependent Multi-Task Federated Learning in Edge Computing

STT-GS: Sample-Then-Transmit Edge Gaussian Splatting with Joint Client Selection and Power Control

DOLFIN: Balancing Stability and Plasticity in Federated Continual Learning

CoLoR-GAN: Continual Few-Shot Learning with Low-Rank Adaptation in Generative Adversarial Networks

Weight Weaving: Parameter Pooling for Data-Free Model Merging

Towards Reversible Model Merging For Low-rank Weights

Purifying Task Vectors in Knowledge-Aware Subspace for Model Merging
