Differential Privacy in Decentralized Learning and AI

The field of differential privacy is moving toward more adaptive approaches for protecting sensitive data in decentralized learning and AI applications. Researchers are exploring methods that dynamically adjust noise levels and learning rates to achieve better privacy-utility trade-offs. Decentralized learning algorithms are being developed with node-level personalized privacy guarantees, and adaptive differential privacy mechanisms are being proposed to address the challenges of privacy-preserving decentralized learning. There is also growing interest in applying differential privacy to AI-based sensing and wireless edge devices to protect sensitive personal data. Notable papers in this area include ALPINE, which proposes a lightweight and adaptive privacy-decision agent framework for dynamic edge crowdsensing; ADP-VRSGP, which introduces decentralized learning with adaptive differential privacy via variance-reduced stochastic gradient push; and Unified Privacy Guarantees for Decentralized Learning via Matrix Factorization, which provides a principled way to develop decentralized learning algorithms with tighter privacy accounting.
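To make the "dynamically adjust noise levels" idea concrete, the sketch below shows the standard differentially private gradient step (clip to a fixed norm, then add Gaussian noise calibrated to that clip) combined with a simple decaying noise schedule. This is a minimal illustration of the general mechanism, not the algorithm of any paper listed here; the function names and the exponential-decay schedule are assumptions chosen for clarity.

```python
import numpy as np

def clip_and_noise(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a gradient to L2 norm <= clip_norm, then add Gaussian noise
    with standard deviation noise_multiplier * clip_norm (the usual
    Gaussian-mechanism calibration, since clipping bounds sensitivity)."""
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

def adaptive_multiplier(round_idx, base=1.2, decay=0.05, floor=0.5):
    """Illustrative adaptive schedule (an assumption, not a published rule):
    inject more noise early and decay toward a floor as training stabilizes.
    Note: any schedule must still be tracked by a privacy accountant."""
    return max(floor, base * np.exp(-decay * round_idx))

# Usage: per-round noise level feeds into the private gradient step.
g = np.array([3.0, 4.0])
for t in range(3):
    private_g = clip_and_noise(g, clip_norm=1.0,
                               noise_multiplier=adaptive_multiplier(t))
```

The privacy-utility trade-off discussed above lives entirely in `noise_multiplier`: larger values give stronger per-round guarantees but noisier updates, which is why adaptive schedules (and the tighter accounting that justifies them) are an active research direction.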

Sources

VaultGemma: A Differentially Private Gemma Model

ALPINE: A Lightweight and Adaptive Privacy-Decision Agent Framework for Dynamic Edge Crowdsensing

Unified Privacy Guarantees for Decentralized Learning via Matrix Factorization

ADP-VRSGP: Decentralized Learning with Adaptive Differential Privacy via Variance-Reduced Stochastic Gradient Push

Adversary-Aware Private Inference over Wireless Channels

On Optimal Hyperparameters for Differentially Private Deep Transfer Learning
