Advances in Federated Learning Security and Fairness

The field of federated learning (FL) is moving to address security and fairness challenges in decentralized systems. Researchers are developing novel aggregation strategies and frameworks that strengthen the resilience and privacy guarantees of FL systems. One key direction is harm-centered fairness: frameworks that tie abstract fairness definitions to concrete risks and the stakeholders they affect. Another active area is measuring participant contributions in decentralized federated learning (DFL), which is crucial for incentivizing clients and ensuring transparency. Notable papers include:

  • Average-rKrum, a robust aggregation strategy that enhances the resilience and privacy guarantees of FL systems.
  • A harm-centered framework that links fairness definitions to concrete risks and stakeholder vulnerabilities, proposing a more holistic approach to fairness research in FL.
  • A decentralized federated learning framework that uses validation loss to guide model sharing and correct local training, demonstrating improved accuracy and convergence speed.
  • Novel methodologies for measuring participant contributions in DFL, including DFL-Shapley and DFL-MR, which provide a valid ground-truth metric and a computable approximation for estimating overall contributions.
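The robust-aggregation idea behind Krum-family methods such as Average-rKrum can be illustrated with classic Krum scoring: each client update is scored by its summed distance to its nearest neighbours, and the lowest-scoring updates are averaged. The sketch below uses the standard Krum/Multi-Krum formulation as an assumption; the paper's actual Average-rKrum rule may differ in its details.

```python
import numpy as np

def krum_scores(updates, f):
    """Score each update by the summed squared distance to its
    n - f - 2 nearest neighbours (classic Krum scoring, where
    f is the assumed number of Byzantine clients)."""
    n = len(updates)
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    k = n - f - 2  # number of neighbours counted per client
    scores = []
    for i in range(n):
        nearest = np.sort(np.delete(dists[i], i))[:k]
        scores.append(nearest.sum())
    return np.array(scores)

def averaged_krum(updates, f, m):
    """Average the m updates with the lowest Krum scores
    (a Multi-Krum-style aggregate; outliers get high scores
    and are excluded from the mean)."""
    scores = krum_scores(updates, f)
    chosen = np.argsort(scores)[:m]
    return np.mean([updates[i] for i in chosen], axis=0)
```

With three honest updates clustered near each other and one adversarial outlier, the outlier's large distances give it the highest score, so it never enters the average.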
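Validation-loss-guided sharing, as in the crop-disease DFL paper, can be sketched as each client evaluating peer models on its local validation set and merging them with weights inversely proportional to their loss. The merge rule and the `val_loss` callback below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def loss_weighted_merge(local_model, peer_models, val_loss):
    """Merge peer parameter vectors into the local one, weighting each
    candidate by inverse validation loss on the local validation set.
    `val_loss` is a caller-supplied function mapping a parameter
    vector to its scalar validation loss (an assumed interface)."""
    candidates = [local_model] + list(peer_models)
    losses = np.array([val_loss(m) for m in candidates])
    weights = 1.0 / (losses + 1e-8)   # lower loss -> larger weight
    weights /= weights.sum()          # normalize to a convex combination
    return sum(w * m for w, m in zip(weights, candidates))
```

A peer model that generalizes well to the local validation data dominates the merge, which is one way validation loss can both guide sharing and correct drift in local training.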
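The ground-truth notion behind DFL-Shapley is the classical Shapley value: a client's contribution is its average marginal gain to a coalition utility (e.g., validation accuracy of a model trained on that coalition). A minimal exact computation, feasible only for small client counts (which is why approximations like DFL-MR exist), might look like:

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley value per client for a coalition utility function.
    `utility` maps a set of clients to a scalar score; exponential in
    len(clients), so practical only as a ground truth for small n."""
    n = len(clients)
    values = {}
    for c in clients:
        others = [x for x in clients if x != c]
        total = 0.0
        for r in range(n):
            for coalition in combinations(others, r):
                s = len(coalition)
                # Shapley weight: s! (n - s - 1)! / n!
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                marginal = utility(set(coalition) | {c}) - utility(set(coalition))
                total += weight * marginal
        values[c] = total
    return values
```

For an additive utility (each client contributes a fixed amount regardless of coalition), the Shapley value recovers each client's individual worth exactly, which makes it a natural ground-truth metric for contribution measurement.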

Sources

Secure and Private Federated Learning: Achieving Adversarial Resilience through Robust Aggregation

Fairness in Federated Learning: Fairness for Whom?

Loss-Guided Model Sharing and Local Learning Correction in Decentralized Federated Learning for Crop Disease Classification

Measuring Participant Contributions in Decentralized Federated Learning
