Differentially Private Optimization Advances

The field of differentially private optimization is moving toward more efficient and more accurate private machine learning algorithms. Recent work has focused on leveraging public data to guide private zeroth-order optimization, improving rank aggregation algorithms, and developing memory-efficient on-device fine-tuning methods. Approaches such as coordinate descent for network linearization and backprop-free zeroth-order optimization have shown promising results.

Noteworthy papers include Private Zeroth-Order Optimization with Public Data, which proposes a framework that achieves superior privacy/utility tradeoffs and a significant runtime speedup; On-Device Fine-Tuning via Backprop-Free Zeroth-Order Optimization, which allows significantly larger models to fit within on-chip memory; and Coordinate Descent for Network Linearization, which yields a sparse solution by design and demonstrates state-of-the-art performance on common benchmarks. A rough sketch of the shared zeroth-order idea follows below.
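
As a rough illustration of the ingredient shared by the two zeroth-order papers above (not the algorithm from either paper), the sketch below estimates a gradient from two loss evaluations along a random probe direction, then clips and perturbs the scalar estimate with Gaussian noise in the spirit of the Gaussian mechanism. The function name `dp_zeroth_order_step`, the clipping threshold, and the noise multiplier are illustrative assumptions; a real differentially private method would calibrate the noise to per-example sensitivity and a formal privacy budget.

```python
import numpy as np

def dp_zeroth_order_step(loss_fn, params, lr=0.1, mu=1e-3,
                         clip=1.0, noise_mult=1.0, rng=None):
    """One backprop-free update: two-point finite-difference estimate of the
    directional derivative along a random direction, clipped and noised
    before the parameter step. Illustrative only."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(params.shape)            # random probe direction
    u /= np.linalg.norm(u)
    # Two loss evaluations replace backpropagation entirely.
    g_scalar = (loss_fn(params + mu * u) - loss_fn(params - mu * u)) / (2 * mu)
    g_scalar = float(np.clip(g_scalar, -clip, clip)) # bound the sensitivity
    g_scalar += rng.normal(0.0, noise_mult * clip)   # Gaussian-mechanism-style noise
    return params - lr * g_scalar * u

# Toy usage on a quadratic loss.
loss = lambda w: float(np.sum(w ** 2))
w = np.ones(10)
for _ in range(200):
    w = dp_zeroth_order_step(loss, w)
```

Because each step needs only forward evaluations and a single random direction, this style of update avoids storing activations for backpropagation, which is what makes it attractive for memory-constrained on-device fine-tuning.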

Sources

Private Zeroth-Order Optimization with Public Data

Improved Differentially Private Algorithms for Rank Aggregation

On-Device Fine-Tuning via Backprop-Free Zeroth-Order Optimization

Coordinate Descent for Network Linearization

On the Gradient Complexity of Private Optimization with Private Oracles

Almost Sure Convergence Analysis of Differentially Private Stochastic Gradient Methods
