Advances in Federated Learning for Privacy-Preserving AI

The field of federated learning is moving toward more robust, privacy-preserving methods for collaborative model training. Recent research targets challenges such as data heterogeneity, client drift, and adversarial attacks, with differential privacy, robust aggregation, and personalized federated learning showing promising results for both model performance and the protection of sensitive data. New frameworks and algorithms, including DP-RTFL, CADRE, and FedAux, enable more efficient and effective federated learning in regulated domains such as healthcare and finance, while task-specific studies on GI endoscopy image classification and object-centric representation learning demonstrate the approach's potential in real-world scenarios. Noteworthy papers include DP-RTFL, which introduces a differentially private, resilient temporal federated learning framework, and FedAux, which proposes a personalized subgraph federated learning approach.
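
To make the combination of these ideas concrete, the sketch below shows one aggregation round that pairs a simple robust aggregator (coordinate-wise median) with norm clipping and Gaussian noise in the spirit of differential privacy. It is a minimal, illustrative example, not the method of any listed paper; the function names and the noise calibration are assumptions, and a real system would use a proper privacy accountant and a vetted robust-aggregation rule.

```python
import numpy as np

def clip_update(update, clip_norm=1.0):
    """Clip a client's model update to a fixed L2 norm (DP-SGD-style clipping)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def dp_robust_aggregate(client_updates, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Combine clipped client updates with a coordinate-wise median (robust to a
    minority of poisoned updates) and add Gaussian noise for privacy.
    NOTE: the noise scale here is illustrative, not a formal DP guarantee."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = np.stack([clip_update(u, clip_norm) for u in client_updates])
    aggregate = np.median(clipped, axis=0)  # robust aggregation step
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(client_updates),
        size=aggregate.shape,
    )
    return aggregate + noise

if __name__ == "__main__":
    # One simulated round: eight honest clients and two Byzantine clients
    # sending obviously poisoned updates.
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.1, 0.01, size=10) for _ in range(8)]
    byzantine = [np.full(10, 100.0) for _ in range(2)]
    global_update = dp_robust_aggregate(honest + byzantine, rng=rng)
    print(global_update)
```

Because the median ignores extreme coordinates, the two poisoned updates have little effect on the aggregated result, while clipping bounds each client's influence before noise is added.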

Sources

DP-RTFL: Differentially Private Resilient Temporal Federated Learning for Trustworthy AI in Regulated Industries

CADRE: Customizable Assurance of Data Readiness in Privacy-Preserving Federated Learning

Personalized Subgraph Federated Learning with Differentiable Auxiliary Projections

Adaptive Deadline and Batch Layered Synchronized Federated Learning

Federated Foundation Model for GI Endoscopy Images

Towards Unified Modeling in Federated Multi-Task Learning via Subspace Decoupling

Robust Federated Learning against Model Perturbation in Edge Networks

Lightweight Relational Embedding in Task-Interpolated Few-Shot Networks for Enhanced Gastrointestinal Disease Classification

ByzFL: Research Framework for Robust Federated Learning

Coded Robust Aggregation for Distributed Learning under Byzantine Attacks

Robust Federated Learning against Noisy Clients via Masked Optimization

Mitigating Data Poisoning Attacks to Local Differential Privacy

Enhancing Convergence, Privacy and Fairness for Wireless Personalized Federated Learning: Quantization-Assisted Min-Max Fair Scheduling

Privacy-Preserving Federated Convex Optimization: Balancing Partial-Participation and Efficiency via Noise Cancellation

Poster: FedBlockParadox -- A Framework for Simulating and Securing Decentralized Federated Learning

Overcoming Challenges of Partial Client Participation in Federated Learning: A Comprehensive Review

Sociodynamics-inspired Adaptive Coalition and Client Selection in Federated Learning

FORLA: Federated Object-centric Representation Learning with Slot Attention

GCFL: A Gradient Correction-based Federated Learning Framework for Privacy-preserving CPSS

Dropout-Robust Mechanisms for Differentially Private and Fully Decentralized Mean Estimation

FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning

HtFLlib: A Comprehensive Heterogeneous Federated Learning Library and Benchmark

Optimal Transport-based Domain Alignment as a Preprocessing Step for Federated Learning

Communication Efficient Adaptive Model-Driven Quantum Federated Learning

Federated Learning Assisted Edge Caching Scheme Based on Lightweight Architecture DDPM

FedAPM: Federated Learning via ADMM with Partial Model Personalization

Learning Theory of Decentralized Robust Kernel-Based Learning Algorithm
