The field of optimization and learning is moving toward more efficient and scalable methods for complex problems. Researchers are working to improve the accuracy and speed of existing algorithms while exploring new approaches that can handle large datasets and high-dimensional functions. Notably, there is growing interest in multi-fidelity methods, which leverage cheap low-fidelity information to improve the performance of high-fidelity models. Decision-focused learning is also gaining traction, as it optimizes predictive models directly for downstream decision quality rather than for prediction accuracy alone. Noteworthy papers include:
- Closing the Approximation Gap of Partial AUC Optimization, which presents two novel formulations for partial AUC optimization that narrow the gap between surrogate objectives and the exact metric, improving both scalability and accuracy.
- Approximate Optimal Active Learning of Decision Trees, which proposes a symbolic method for active learning of decision trees, enabling near-optimal query selection without full model enumeration.
- Scalable Decision Focused Learning via Online Trainable Surrogates, which introduces an acceleration method based on unbiased estimators, reducing the risk of spurious local optima and improving the scalability of decision-focused learning.
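For context on the first paper, partial AUC restricts the area under the ROC curve to a practically relevant false-positive-rate range. The following is a minimal sketch of the metric itself (not the paper's optimization formulations), assuming binary labels and distinct scores:

```python
def partial_auc(labels, scores, max_fpr):
    """Area under the ROC curve restricted to FPR in [0, max_fpr].

    Assumes binary labels (1 = positive) and distinct scores; tied
    scores would need the usual ROC tie handling, omitted for brevity.
    """
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]          # ROC curve as (FPR, TPR) points
    for _, y in ranked:
        if y:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x1 <= max_fpr:          # segment fully inside the FPR range
            area += (x1 - x0) * (y0 + y1) / 2
        elif x0 < max_fpr:         # segment crosses max_fpr: clip it
            t = (max_fpr - x0) / (x1 - x0)
            y_mid = y0 + t * (y1 - y0)
            area += (max_fpr - x0) * (y0 + y_mid) / 2
            break
    return area

# A perfect ranking attains the maximum area in any FPR range.
print(partial_auc([1, 1, 0, 0], [0.9, 0.8, 0.7, 0.1], 0.5))  # 0.5
print(partial_auc([1, 1, 0, 0], [0.9, 0.8, 0.7, 0.1], 1.0))  # 1.0
```

With `max_fpr=1.0` this reduces to the ordinary AUC; optimizing the truncated area directly is what makes the problem non-trivial at scale.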
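The objective in decision-focused learning is downstream decision quality rather than prediction error. A toy illustration of the decision regret such methods minimize, using a hypothetical top-k selection problem (not the paper's online surrogate method):

```python
def top_k(values, k):
    """Indices of the k largest entries (the downstream decision)."""
    return sorted(range(len(values)), key=lambda i: -values[i])[:k]

def decision_regret(true_vals, pred_vals, k):
    """True value lost by deciding on predictions instead of the truth."""
    chosen = top_k(pred_vals, k)
    oracle = top_k(true_vals, k)
    return (sum(true_vals[i] for i in oracle)
            - sum(true_vals[i] for i in chosen))

true_vals = [3.0, 1.0, 2.0]
print(decision_regret(true_vals, [1.0, 3.0, 2.0], k=1))     # 2.0: wrong item picked
print(decision_regret(true_vals, [30.0, 10.0, 20.0], k=1))  # 0.0: same ranking
```

The second prediction is far from the truth in squared error yet incurs zero regret, which is exactly why decision-focused training targets regret rather than prediction loss, and why its piecewise-constant dependence on predictions makes gradient-based training hard without surrogates.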