The field of stochastic convex optimization is moving toward methods that are robust and adaptive to unknown problem parameters. Researchers are exploring reliable model selection, regularization, and gradient-based strategies to improve the sample complexity and generalization performance of stochastic optimization methods. Notable papers include:
- The Sample Complexity of Parameter-Free Stochastic Convex Optimization, which develops a reliable model selection method and a regularization-based method to adapt to unknown problem parameters.
- Generalization Bound of Gradient Flow through Training Trajectory and Data-dependent Kernel, which establishes a generalization bound for gradient flow that captures the entire training trajectory and adapts to both data and optimization dynamics.
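The parameter-free theme of the first paper can be illustrated with a classic building block from the parameter-free online learning literature: Krichevsky-Trofimov (KT) coin betting, which requires no step size and no bound on the distance to the optimum. The sketch below is illustrative only, not the method developed in the paper; the objective `f(w) = |w - 2|` and all variable names are assumptions chosen for the example.

```python
def kt_coin_betting(grad, T, eps=1.0):
    """KT coin-betting optimizer in one dimension.

    Parameter-free: no learning rate and no knowledge of the optimum's
    location is needed. Subgradients must lie in [-1, 1].
    """
    wealth = eps        # initial "wealth" of the bettor
    sum_neg_g = 0.0     # running sum of -g_s (net outcome of past coin flips)
    iterates = []
    for t in range(1, T + 1):
        # Bet a signed fraction of current wealth (the KT betting fraction).
        w = (sum_neg_g / t) * wealth
        iterates.append(w)
        g = grad(w)
        wealth += -g * w    # wealth grows when the bet agrees with -g
        sum_neg_g += -g
    return sum(iterates) / T    # averaged iterate

# Example: minimize f(w) = |w - 2| via its subgradient, which is
# already in [-1, 1]; the average iterate approaches the minimizer 2.
w_bar = kt_coin_betting(lambda w: float((w > 2) - (w < 2)), 5000)
```

The averaged iterate converges at a rate comparable to optimally tuned gradient descent, despite the algorithm never being told a step size; this adaptivity to unknown parameters is the flavor of guarantee the paper studies in the stochastic convex setting.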