Advances in Submodular Optimization and Online Learning

The field of optimization is seeing significant activity in submodular optimization and online learning. Researchers are developing algorithms that cope with noisy or uncertain data and adapt to changing constraints and objectives. One notable direction is meta-algorithms that transform existing optimization methods to tolerate noise while retaining their performance guarantees. Another is online optimization, where an algorithm must make decisions in real time without prior knowledge of the future. Noteworthy papers include:

  • A Unified Approach to Submodular Maximization Under Noise, which presents a meta-algorithm that lifts existing submodular maximization methods to the noisy setting while retaining their guarantees.
  • Online Optimization for Offline Safe Reinforcement Learning, which tackles offline safe reinforcement learning by combining offline RL with online optimization algorithms.
  • Optimal Anytime Algorithms for Online Convex Optimization with Adversarial Constraints, which develops anytime algorithms (requiring no advance knowledge of the time horizon) for online convex optimization with adversarial constraints.
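To make the noisy-submodular setting concrete, here is a minimal sketch (not the paper's meta-algorithm) of the standard greedy routine for cardinality-constrained submodular maximization, queried through a noisy value oracle; repeated queries are averaged to damp the noise. The coverage function, the noise model, and all names are illustrative assumptions.

```python
import random

def noisy_greedy_max(elements, oracle, k, trials=25):
    """Greedy maximization of a submodular function under a cardinality
    constraint |S| <= k, using only a noisy value oracle. Each marginal
    gain is estimated by averaging `trials` noisy evaluations."""
    S = []
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in elements:
            if e in S:
                continue
            # Average several noisy evaluations of the marginal gain f(S+e)-f(S).
            gain = sum(oracle(S + [e]) - oracle(S) for _ in range(trials)) / trials
            if gain > best_gain:
                best, best_gain = e, gain
        S.append(best)
    return S

# Toy example: a coverage function f(S) = |union of the sets indexed by S|,
# observed with additive Gaussian noise (a hypothetical noise model).
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 5, 6, 7}}

def oracle(S):
    covered = set().union(*(sets[i] for i in S)) if S else set()
    return len(covered) + random.gauss(0, 0.1)

random.seed(0)
print(noisy_greedy_max(list(sets), oracle, k=2))
```

For a monotone submodular function and an exact oracle, this greedy scheme attains the classical (1 - 1/e) approximation; averaging is one simple way to keep the estimates accurate enough under noise.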
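As a baseline for the online convex optimization setting, the following sketch shows online gradient descent with horizon-independent step sizes eta_t = 1/sqrt(t), a standard anytime scheme (illustrative only, not the algorithm from the paper above, and without the adversarial-constraint machinery).

```python
import math

def online_gradient_descent(grad, project, x0, rounds):
    """Online gradient descent with anytime step sizes eta_t = 1/sqrt(t).
    No horizon needs to be fixed in advance; on convex losses with bounded
    gradients this yields O(sqrt(T)) regret."""
    x = x0
    iterates = []
    for t in range(1, rounds + 1):
        iterates.append(x)
        g = grad(x, t)            # gradient of the round-t loss at x
        eta = 1.0 / math.sqrt(t)  # step size independent of the horizon
        x = project(x - eta * g)  # projected gradient step
    return iterates

# Toy example: every round's loss is f_t(x) = (x - 1)^2 on the interval [0, 2].
grad = lambda x, t: 2 * (x - 1)
project = lambda x: min(2.0, max(0.0, x))
xs = online_gradient_descent(grad, project, x0=0.0, rounds=200)
print(xs[-1])
```

With a fixed quadratic loss the iterates settle near the minimizer x = 1; in the genuinely adversarial setting the same update rule controls regret against the best fixed point in hindsight.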

Sources

A Unified Approach to Submodular Maximization Under Noise

Scale-robust Auctions

Online Optimization for Offline Safe Reinforcement Learning

Optimal Anytime Algorithms for Online Convex Optimization with Adversarial Constraints

Learning-Augmented Online Bidding in Stochastic Settings

NP-Hardness of Approximating Nash Social Welfare with Supermodular Valuations