Federated Learning Advancements

Research in federated learning is converging on three key challenges: privacy, efficiency, and robustness. Recent work protects sensitive data, makes better use of computational resources, and ensures reliable model performance through techniques such as adaptive privacy budgets, hierarchical asynchronous mechanisms, and generative models. A growing line of work also targets robust knowledge removal and provable unlearning in federated settings. Noteworthy papers include:

  • Evaluation of Differential Privacy Mechanisms on Federated Learning, which introduces an adaptive clipping approach to maintain model accuracy while preserving privacy.
  • PubSub-VFL, which proposes a publisher/subscriber architecture for computationally efficient two-party split learning in heterogeneous environments.
  • Federated Conditional Conformal Prediction via Generative Models, which aims to achieve conditional coverage that adapts to local data heterogeneity.
  • Provable Unlearning with Gradient Ascent on Two-Layer ReLU Neural Networks, which provides a theoretical analysis of a simple method for removing specific data from trained models.
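The adaptive-clipping idea behind the first paper can be sketched in a few lines. This is an illustrative assumption, not the paper's exact mechanism: the clip bound is set each round to a quantile of the clients' update norms, and Gaussian noise scaled to that bound is added to the average.

```python
import numpy as np

def adaptive_clip(updates, quantile=0.5):
    """Clip each client update to an adaptively chosen norm bound.

    The bound is the given quantile of this round's update norms
    (a common adaptive-clipping heuristic; illustrative only).
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    bound = np.quantile(norms, quantile)
    clipped = [u * min(1.0, bound / (np.linalg.norm(u) + 1e-12)) for u in updates]
    return clipped, bound

def dp_aggregate(updates, noise_multiplier=1.0, rng=None):
    """Average the clipped updates and add Gaussian noise scaled to the clip bound."""
    rng = rng or np.random.default_rng(0)
    clipped, bound = adaptive_clip(updates)
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * bound / len(updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

Because the bound tracks the empirical norm distribution, large outlier updates are scaled down while typical updates pass through nearly unchanged, which is what lets accuracy survive the noise addition.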
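The gradient-ascent unlearning method from the last paper admits a similarly minimal sketch: run gradient *ascent* on the forget set so the model's loss on that data increases. The two-layer ReLU network, squared-error objective, and full-batch updates below are assumptions for illustration; the paper's theoretical setting may differ.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(params, X):
    W1, W2 = params
    return relu(X @ W1) @ W2

def mse_grads(params, X, y):
    """Gradients of mean squared error for a two-layer ReLU network."""
    W1, W2 = params
    H = relu(X @ W1)
    err = (H @ W2 - y) / len(X)
    gW2 = H.T @ err
    gW1 = X.T @ ((err @ W2.T) * (X @ W1 > 0))
    return gW1, gW2

def unlearn(params, X_forget, y_forget, lr=0.05, steps=20):
    """Gradient ascent on the forget set: step *up* the loss surface."""
    W1, W2 = (p.copy() for p in params)
    for _ in range(steps):
        gW1, gW2 = mse_grads((W1, W2), X_forget, y_forget)
        W1 += lr * gW1  # ascend, not descend
        W2 += lr * gW2
    return W1, W2
```

The appeal of the method is its simplicity; the paper's contribution is the theoretical analysis showing when this ascent provably removes the forget set's influence.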

Sources

Evaluation of Differential Privacy Mechanisms on Federated Learning

PubSub-VFL: Towards Efficient Two-Party Split Learning in Heterogeneous Environments via Publisher/Subscriber Architecture

Local Differential Privacy for Federated Learning with Fixed Memory Usage and Per-Client Privacy

Federated Conditional Conformal Prediction via Generative Models

Towards Robust Knowledge Removal in Federated Learning with High Data Heterogeneity

Provable Unlearning with Gradient Ascent on Two-Layer ReLU Neural Networks
