Advancements in Zeroth-Order Optimization and Sharpness-Aware Learning

Recent work in optimization has concentrated on two closely related threads: zeroth-order optimization and sharpness-aware learning. Zeroth-order methods estimate gradients from function evaluations alone, which makes them useful when gradients are unavailable or expensive to compute, and recent developments have focused on improving the accuracy of these gradient estimates and the convergence of the resulting algorithms. Sharpness-aware learning has likewise advanced, with new methods proposed to improve the generalization of trained models by steering them toward flatter minima. Notably, the connection between the two is being explored, since both rely on evaluating the loss at perturbed points, and this has led to new algorithms and objectives with better generalization and convergence.

Several papers stand out. Zeroth-Order Sharpness-Aware Learning with Exponential Tilting proposes an exponentially tilted objective that connects zeroth-order optimization with sharpness-aware minimization. SAMOSA: Sharpness Aware Minimization for Open Set Active learning achieves up to 3% accuracy improvement over the state of the art across several datasets. On the Optimal Construction of Unbiased Gradient Estimators for Zeroth-Order Optimization proposes a family of gradient estimators that eliminate bias while maintaining favorable variance. Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations studies minimum-variance two-point estimators and the potential advantages of aligning the perturbation direction with the gradient.
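To make the shared structure concrete, the sketch below pairs a standard two-point zeroth-order gradient estimator with a SAM-style two-step update: estimate the gradient by a finite difference along a random Gaussian direction, ascend to an approximate worst-case perturbed point, then descend using a second zeroth-order estimate taken there. This is an illustrative sketch only; the function names, hyperparameters (`mu`, `rho`, `lr`), and the toy quadratic are assumptions for demonstration, not the formulation of any cited paper.

```python
import numpy as np

def two_point_zo_grad(f, x, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Draws one Gaussian direction u and returns the finite difference
    (f(x + mu*u) - f(x - mu*u)) / (2*mu) projected onto u. This is the
    classic estimator the cited works analyze and refine.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def zo_sam_step(f, x, lr=0.1, rho=0.05, mu=1e-3, rng=None):
    """One sharpness-aware step driven entirely by zeroth-order estimates.

    1. Estimate the gradient at x and move to the approximate worst-case
       point x + rho * g / ||g||, as in standard SAM.
    2. Re-estimate the gradient at the perturbed point and descend with it.
    """
    g = two_point_zo_grad(f, x, mu, rng)
    g_norm = np.linalg.norm(g) + 1e-12
    x_adv = x + rho * g / g_norm          # ascent to the sharpness probe point
    g_adv = two_point_zo_grad(f, x_adv, mu, rng)
    return x - lr * g_adv                 # descend using the perturbed-point gradient

if __name__ == "__main__":
    # Toy quadratic: the iterate should approach the origin.
    f = lambda x: 0.5 * float(x @ x)
    x = np.ones(10)
    rng = np.random.default_rng(0)
    for _ in range(500):
        x = zo_sam_step(f, x, lr=0.1, rho=0.01, mu=1e-4, rng=rng)
    print("final loss:", f(x))
```

The cited works refine these ingredients in different ways: reducing the bias or variance of the two-point estimate, aligning the perturbation direction, or replacing the inner maximization with a smoothed objective such as an exponentially tilted loss.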

Sources

Zeroth-Order Sharpness-Aware Learning with Exponential Tilting

SAMOSA: Sharpness Aware Minimization for Open Set Active learning

On the Optimal Construction of Unbiased Gradient Estimators for Zeroth-Order Optimization

Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations
