Federated Learning for Enhanced Privacy and Efficiency

The field of federated learning is moving toward solutions that enhance privacy and efficiency across a range of applications. Researchers are exploring new techniques for model aggregation, incentive design, and robustness in federated learning systems. Notably, metadata-driven aggregation, dynamic pricing, and distributionally robust optimization are being investigated to address challenges in communication systems, online learning, and edge AI-generated content (AIGC) services. These advances could make real-world federated learning deployments more secure, efficient, and personalized. Noteworthy papers in this area include:

- DaringFed, which introduces dynamic Bayesian persuasion pricing for online federated learning under two-sided incomplete information.
- Distributionally Robust Contract Theory for Edge AIGC Services, which proposes a contract-theoretic framework for designing robust reward schemes for edge AI-generated content services.
- RiM, which presents a personalized machine learning framework that leverages federated learning to improve students' physical well-being.
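To ground the model-aggregation theme, here is a minimal sketch of federated averaging (FedAvg-style) aggregation, the baseline that metadata-aware aggregation work builds on. The function name and the example values are illustrative, not taken from any of the papers listed below.

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size.

    client_weights: list of per-client parameter vectors (lists of floats).
    client_sizes: list of per-client training-set sizes.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    aggregated = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            # Each client's contribution is proportional to its data volume.
            aggregated[i] += (size / total) * w
    return aggregated

# Two clients with unequal data volumes: the larger client dominates.
global_model = fed_avg([[1.0, 2.0], [3.0, 6.0]], client_sizes=[1, 3])
# global_model == [2.5, 5.0]
```

Only model parameters, never raw data, leave each client, which is the core privacy property the surveyed work extends.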

Sources

FedAvgen: Metadata for Model Aggregation In Communication Systems

DaringFed: A Dynamic Bayesian Persuasion Pricing for Online Federated Learning under Two-sided Incomplete Information

RiM: Record, Improve and Maintain Physical Well-being using Federated Learning

Distributionally Robust Contract Theory for Edge AIGC Services in Teleoperation

Privacy-aware Berrut Approximated Coded Computing applied to general distributed learning

LECTOR: Summarizing E-book Reading Content for Personalized Student Support

Ranking-Based At-Risk Student Prediction Using Federated Learning and Differential Features
