Advances in Robust Optimization and Adversarial Machine Learning

The field of machine learning is moving toward more robust and reliable models, with a focus on optimizing for flat minima and improving performance under adversarial attack. Recent work shows that zeroth-order optimization methods can converge to flat minima, a desirable property in many applications. There is also growing interest in visualizing and understanding the risk landscape of performative prediction models, which is crucial for building more effective and robust systems. In adversarial training, surrogate risk bounds are being developed that quantify the convergence rate of the adversarial classification risk. Finally, novel survival models based on imprecise probability theory and attention mechanisms are being proposed that handle censored data without parametric assumptions. Noteworthy papers include:

  • Zeroth-Order Optimization Finds Flat Minima, which provides convergence rates for zeroth-order optimization to approximate flat minima.
  • The Decoupled Risk Landscape in Performative Prediction, which introduces a novel setting for extended performative prediction and proposes new properties of interest points.
  • Adversarial Surrogate Risk Bounds for Binary Classification, which provides surrogate risk bounds that quantify the convergence rate of adversarial classification risk.
  • Survival Analysis as Imprecise Classification with Trainable Kernels, which introduces novel survival models that can handle censored data without parametric assumptions.
  • Lattice Climber Attack, which introduces a new adversarial attack for randomized mixtures of classifiers, with theoretical guarantees.
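To make the zeroth-order idea concrete, below is a minimal sketch of a standard two-point Gaussian-smoothing gradient estimator, which needs only function evaluations (no gradients). This is a generic illustration of zeroth-order optimization, not the specific algorithm or flatness analysis from the paper above; the toy objective `f`, step size, and sample count are illustrative choices. On this objective, which has a sharp basin near x = -1 and a flat basin near x = +1, the iterates settle into the flat basin.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_samples=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over Gaussian
    directions u; in expectation this approximates the gradient of a
    Gaussian-smoothed version of f."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

# Toy 1-D objective: a sharp minimum near x = -1, a flat one near x = +1.
def f(x):
    return min(50.0 * (x[0] + 1.0) ** 2, 0.5 * (x[0] - 1.0) ** 2 + 0.01)

x = np.array([0.2])
for _ in range(500):
    x = x - 0.05 * zo_gradient(f, x)
# x ends up near the flat minimum at x = 1.
```

The estimator touches `f` only through forward evaluations, which is why zeroth-order methods apply to black-box settings such as query-based adversarial attacks.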

Sources

Zeroth-Order Optimization Finds Flat Minima

The Decoupled Risk Landscape in Performative Prediction

Adversarial Surrogate Risk Bounds for Binary Classification

Survival Analysis as Imprecise Classification with Trainable Kernels

Lattice Climber Attack: Adversarial attacks for randomized mixtures of classifiers
