Advances in Federated Learning for Heterogeneous Data

Federated learning research is increasingly focused on the challenges posed by heterogeneous (non-IID) data distributions across clients. Recent work aims to improve both the performance and fairness of federated models under such heterogeneity, exploring approaches such as hypernetworks, sheaf collaboration, and dimension-decomposed learning to strengthen personalization and generalization for real-world deployment. Noteworthy papers include FedUHD, a federated learning framework based on hyperdimensional computing, and FedSheafHN, which introduces a sheaf collaboration mechanism for personalized subgraph federated learning. These advances could substantially improve the effectiveness of federated learning across domains such as computer vision, natural language processing, and graph learning.
Sources
DFed-SST: Building Semantic- and Structure-aware Topologies for Decentralized Federated Graph Learning
Fed-Meta-Align: A Similarity-Aware Aggregation and Personalization Pipeline for Federated TinyML on Heterogeneous Data
Breaking the Aggregation Bottleneck in Federated Recommendation: A Personalized Model Merging Approach
Deploying Models to Non-participating Clients in Federated Learning without Fine-tuning: A Hypernetwork-based Approach
Dextr: Zero-Shot Neural Architecture Search with Singular Value Decomposition and Extrinsic Curvature
Comparison of derivative-free and gradient-based minimization for multi-objective compositional design of shape memory alloys
Dimension-Decomposed Learning for Quadrotor Geometric Attitude Control with Almost Global Exponential Convergence on SO(3)
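To make the hypernetwork idea above concrete: in such schemes, a server-side hypernetwork maps a per-client descriptor (an embedding) to that client's model weights, so a non-participating client can receive a personalized model without any fine-tuning rounds. The sketch below is illustrative only and not taken from any of the listed papers; the dimensions, the linear hypernetwork, and the function names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not from the paper).
EMBED_DIM = 8   # size of the per-client embedding
IN_DIM = 4      # feature dimension of the client task
OUT_DIM = 3     # number of classes

# Hypernetwork: here simply a linear map from a client embedding to the
# flattened weights of a per-client linear classifier. Real systems would
# train this map across participating clients; we just initialize it.
H = rng.normal(scale=0.1, size=(IN_DIM * OUT_DIM, EMBED_DIM))

def generate_client_weights(client_embedding):
    """Produce personalized classifier weights of shape (IN_DIM, OUT_DIM)."""
    flat = H @ client_embedding
    return flat.reshape(IN_DIM, OUT_DIM)

def client_predict(W, x):
    """Logits of the personalized linear classifier for one sample."""
    return x @ W

# A new, non-participating client only needs to supply its embedding:
# the server generates its model directly, with no fine-tuning.
new_client_embedding = rng.normal(size=EMBED_DIM)
W_new = generate_client_weights(new_client_embedding)
logits = client_predict(W_new, rng.normal(size=IN_DIM))
print(logits.shape)
```

The design choice this illustrates is that personalization cost shifts from per-client training to a single forward pass through the hypernetwork, which is what makes deployment to clients that never participated in training feasible.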