Advances in Private Learning and Optimization

Recent work in private learning and optimization aims to make differentially private stochastic gradient descent (DP-SGD) and adaptive optimizers more accurate, robust, and efficient on complex datasets. Key directions include learning rate scheduling, feature learning analyses, and population size reduction. Among the highlighted papers, one proposes a learning-rate-aware matrix factorization that improves accuracy in private training, and another develops a theoretical framework for analyzing private training from a feature learning perspective. A new variant of Differential Evolution (DE), built on nonlinear population size reduction and adaptive restarts, reports top-tier performance across multiple benchmark suites. Further contributions include an optimizer with continuously tunable adaptivity and a memory-efficient, sparsity-aware adaptive optimizer for differentially private training. Together, these results point toward more efficient and effective private training of machine learning models.
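For context on the baseline these papers build on, the sketch below shows the core DP-SGD update: clip each per-example gradient, average, and add Gaussian noise. This is a minimal illustrative sketch, not code from any of the listed papers; the function name, parameters, and the toy linear-regression usage are assumptions made for the example.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD update: clip per-example gradients, average, add Gaussian noise.

    per_example_grads: array of shape (batch_size, dim), one gradient per example.
    clip_norm: maximum L2 norm allowed for each per-example gradient.
    noise_multiplier: noise standard deviation as a multiple of clip_norm.
    """
    batch_size = per_example_grads.shape[0]

    # Clip each per-example gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Sum the clipped gradients, add calibrated Gaussian noise, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / batch_size

    # Plain SGD step on the privatized gradient.
    return params - lr * noisy_mean


# Toy usage on a synthetic linear-regression batch (illustrative only).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
w = np.zeros(5)
per_ex_grads = (X @ w - y)[:, None] * X  # per-example squared-loss gradients
w = dp_sgd_step(w, per_ex_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=rng)
```

The highlighted papers modify pieces of this loop, for example how the learning rate and noise interact across steps, or how adaptive statistics are maintained under memory and sparsity constraints.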

Sources

Learning Rate Scheduling with Matrix Factorization for Private Training

Understanding Private Learning From Feature Perspective

Robust Differential Evolution via Nonlinear Population Size Reduction and Adaptive Restart: The ARRDE Algorithm

HVAdam: A Full-Dimension Adaptive Optimizer

DP-MicroAdam: Private and Frugal Algorithm for Training and Fine-tuning

Adam Simplified: Bias Correction Simplified

Gradient Descent Algorithm Survey
