The field of federated learning is moving toward addressing key challenges in privacy, efficiency, and robustness. Researchers are exploring solutions that protect sensitive data, make better use of computational resources, and ensure reliable model performance. Notable developments include adaptive privacy budgets, hierarchical asynchronous mechanisms, and generative models used to enhance privacy and efficiency. There is also a growing focus on methods for robust knowledge removal and provable unlearning in federated learning settings. Noteworthy papers include:
- Evaluation of Differential Privacy Mechanisms on Federated Learning, which introduces an adaptive clipping approach to maintain model accuracy while preserving privacy (see the first sketch after this list).
- PubSub-VFL, which proposes a novel paradigm for two-party collaborative learning with high computational efficiency.
- Federated Conditional Conformal Prediction via Generative Models, which aims to achieve conditional coverage that adapts to local data heterogeneity.
- Provable Unlearning with Gradient Ascent on Two-Layer ReLU Neural Networks, which provides a theoretical analysis of a simple method for removing specific data from trained models (see the second sketch after this list).
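
To make the adaptive-clipping idea from the first paper concrete, the following minimal NumPy sketch clips each client's update to a bound derived from the observed update norms and adds Gaussian noise to the server-side aggregate. The quantile-based bound, the noise scaling, and all names (`clip_and_noise`, `clip_quantile`, `noise_multiplier`) are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def clip_and_noise(client_updates, clip_quantile=0.5, noise_multiplier=1.0, rng=None):
    """Clip each client's update to an adaptively chosen norm bound,
    then add Gaussian noise to the aggregate (server-side DP sketch)."""
    rng = rng or np.random.default_rng(0)
    norms = [np.linalg.norm(u) for u in client_updates]
    # Adaptive clipping bound: a quantile of the observed update norms.
    # (The paper's exact adaptation rule may differ; this is an assumption.)
    clip_bound = np.quantile(norms, clip_quantile)
    clipped = [u * min(1.0, clip_bound / (n + 1e-12))
               for u, n in zip(client_updates, norms)]
    aggregate = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the clipping bound and the number of clients.
    sigma = noise_multiplier * clip_bound / len(client_updates)
    return aggregate + rng.normal(0.0, sigma, size=aggregate.shape)

# Example: 8 clients, each contributing a 10-dimensional model update.
updates = [np.random.default_rng(i).normal(size=10) for i in range(8)]
noisy_avg = clip_and_noise(updates, clip_quantile=0.5, noise_multiplier=1.0)
```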
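Similarly, the gradient-ascent unlearning analyzed in the last paper can be sketched as taking ascent steps on the loss of the data to be forgotten. The two-layer ReLU network, squared-error loss, and hyperparameters below are assumptions for illustration and do not reproduce the paper's theoretical setting or guarantees.

```python
import numpy as np

def forward(X, W1, b1, W2, b2):
    """Two-layer ReLU network with a scalar (logit) output per example."""
    h = np.maximum(0.0, X @ W1 + b1)        # hidden ReLU activations
    return h @ W2 + b2, h

def unlearn_by_gradient_ascent(params, X_forget, y_forget, lr=0.01, steps=50):
    """Run gradient *ascent* on the forget-set loss, pushing the model away
    from the data to be removed (learning rate and steps are illustrative)."""
    W1, b1, W2, b2 = (p.copy() for p in params)
    for _ in range(steps):
        logits, h = forward(X_forget, W1, b1, W2, b2)
        # Squared-error loss gradient with respect to the logits.
        d_logits = 2.0 * (logits - y_forget) / len(y_forget)
        dW2 = h.T @ d_logits
        db2 = d_logits.sum(axis=0)
        dh = d_logits @ W2.T
        dh[h <= 0.0] = 0.0                   # ReLU backward pass
        dW1 = X_forget.T @ dh
        db1 = dh.sum(axis=0)
        # Ascent step: add the gradient instead of subtracting it.
        W1 += lr * dW1; b1 += lr * db1
        W2 += lr * dW2; b2 += lr * db2
    return W1, b1, W2, b2

# Example with random parameters and a small forget set.
rng = np.random.default_rng(0)
params = (rng.normal(size=(5, 16)), np.zeros(16), rng.normal(size=(16, 1)), np.zeros(1))
X_f, y_f = rng.normal(size=(20, 5)), rng.normal(size=(20, 1))
new_params = unlearn_by_gradient_ascent(params, X_f, y_f)
```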