Responsible Machine Learning and Privacy

The field of machine learning is moving toward a more responsible, privacy-focused approach. Rather than treating fairness, transparency, and privacy as afterthoughts, researchers are embedding these considerations directly into the learning process, using techniques such as differential privacy, federated learning, and zero-knowledge proofs to protect sensitive information and prevent harm (two minimal sketches of these techniques follow the list below). Noteworthy papers in this area include:

  • A Scalable System to Prove Machine Learning Fairness in Zero-Knowledge, which proposes a scalable system for proving that a model satisfies fairness properties without revealing the model itself.
  • Toward Fair Federated Learning under Demographic Disparities and Data Imbalance, which introduces a framework-agnostic method for fair federated learning.
  • A Federated Random Forest Solution for Secure Distributed Machine Learning, which presents a federated learning framework for Random Forest classifiers that preserves data privacy.
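Two of these techniques compose naturally. The following sketch is illustrative only (it is not taken from any of the papers above, and all function names and parameters are invented for the example): it runs federated averaging where each client clips its weight update and adds Gaussian noise before the server aggregates, the basic mechanism behind differentially private federated learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few steps of logistic-regression
    gradient descent starting from the current global weights."""
    w = global_w.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

def clip_and_noise(delta, clip=1.0, noise_scale=0.1):
    """Bound each client's influence (L2 clipping), then add Gaussian
    noise -- the core mechanics of differentially private aggregation."""
    delta = delta * min(1.0, clip / max(np.linalg.norm(delta), 1e-12))
    return delta + rng.normal(0.0, noise_scale * clip, size=delta.shape)

def federated_round(global_w, clients):
    """One FedAvg round: clients train locally on data that never leaves
    them; the server only sees clipped, noised weight deltas."""
    deltas = [clip_and_noise(local_update(global_w, X, y) - global_w)
              for X, y in clients]
    return global_w + np.mean(deltas, axis=0)

# Toy setup: three clients with differently skewed label distributions,
# mimicking the demographic-disparity / data-imbalance setting above.
dim = 5
clients = []
for skew in (0.2, 0.5, 0.8):
    X = rng.normal(size=(200, dim))
    clients.append((X, (rng.random(200) < skew).astype(float)))

w = np.zeros(dim)
for _ in range(10):
    w = federated_round(w, clients)
print("global weights after 10 rounds:", np.round(w, 3))
```

Clipping bounds any single client's contribution, which is what makes the added noise meaningful as a privacy control; calibrating `noise_scale` to a formal (ε, δ) guarantee requires a proper privacy accountant, which this sketch omits.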
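Zero-knowledge proofs supply the complementary guarantee: a prover convinces a verifier that a claim holds without revealing the secret behind it. The fairness-proving system above applies this idea to model properties at scale; as a toy illustration of the primitive only (a hypothetical demo, not that paper's protocol), here is a Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir heuristic.

```python
import hashlib
import secrets

# Toy group: the Mersenne prime 2^127 - 1 with generator g. Exponent
# arithmetic modulo p - 1 is valid by Fermat's little theorem. Real
# systems use vastly larger groups or elliptic curves.
p = 2**127 - 1
g = 3
n = p - 1

def fiat_shamir(y, commitment):
    """Derive the challenge by hashing the public values (Fiat-Shamir)."""
    digest = hashlib.sha256(f"{y}:{commitment}".encode()).digest()
    return int.from_bytes(digest, "big") % n

def prove(secret):
    """Prove knowledge of `secret` where y = g^secret mod p, without
    sending `secret` itself."""
    y = pow(g, secret, p)
    r = secrets.randbelow(n)          # one-time blinding value
    commitment = pow(g, r, p)
    challenge = fiat_shamir(y, commitment)
    response = (r + challenge * secret) % n
    return y, commitment, response

def verify(y, commitment, response):
    """Check g^response == commitment * y^challenge (mod p); this holds
    iff the prover knew the discrete log of y."""
    challenge = fiat_shamir(y, commitment)
    return pow(g, response, p) == (commitment * pow(y, challenge, p)) % p

y, t, s = prove(secrets.randbelow(n))
print("proof verifies:", verify(y, t, s))
```

The verifier learns only that the prover knows the exponent, never the exponent itself; systems that prove fairness in zero-knowledge build far more elaborate statements about models from primitives of this kind.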

Sources

  • Crowding Out The Noise: Algorithmic Collective Action Under Differential Privacy
  • Mixed-Integer Optimization for Responsible Machine Learning
  • Federated Learning with LoRA Optimized DeiT and Multiscale Patch Embedding for Secure Eye Disease Recognition
  • Privacy of Groups in Dense Street Imagery
  • Fair Play for Individuals, Foul Play for Groups? Auditing Anonymization's Impact on ML Fairness
  • A Scalable System to Prove Machine Learning Fairness in Zero-Knowledge
  • A Federated Random Forest Solution for Secure Distributed Machine Learning
  • Toward Fair Federated Learning under Demographic Disparities and Data Imbalance
  • One For All: Formally Verifying Protocols which use Aggregate Signatures (extended version)