The field of machine learning is moving toward models that remain reliable under adversarial actions and concept drift. Researchers are exploring frameworks and techniques such as conformal prediction and in-context adaptation to evaluate and improve model robustness, with promising results in applications including indoor positioning, query optimization, and database operations. Notably, conformal prediction has enabled models with statistically guaranteed correctness coverage, while in-context adaptation has allowed efficient adjustment to shifting concepts in dynamic environments. Overall, the field is advancing toward reliable, efficient machine learning models that can handle complex real-world scenarios. Noteworthy papers include:
- Conformal Prediction for Indoor Positioning with Correctness Coverage Guarantees, which applies conformal prediction to deep learning-based indoor positioning, achieving high accuracy and strong generalization.
- In-Context Adaptation to Concept Drift for Learned Database Operations, which proposes FLAIR, an online adaptation framework that delivers predictions aligned with the current concept and eliminates the need for runtime parameter optimization.
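To make the coverage guarantee mentioned above concrete, here is a minimal sketch of split conformal prediction for regression. This is a generic illustration of the technique, not the method of either paper: the toy data, the least-squares point predictor, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + Gaussian noise (illustrative only).
x = rng.uniform(0, 10, size=2000)
y = 2 * x + rng.normal(0, 1, size=2000)

# Split conformal prediction: disjoint fit / calibration / test sets.
x_fit, y_fit = x[:1000], y[:1000]
x_cal, y_cal = x[1000:1500], y[1000:1500]
x_test, y_test = x[1500:], y[1500:]

# Any point predictor works; a least-squares line stands in for the model.
slope, intercept = np.polyfit(x_fit, y_fit, deg=1)
predict = lambda v: slope * v + intercept

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile for target coverage 1 - alpha.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# The resulting intervals carry a finite-sample marginal coverage
# guarantee of at least 1 - alpha, regardless of the underlying model.
lower = predict(x_test) - q
upper = predict(x_test) + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"empirical coverage: {coverage:.3f}")  # should be close to 0.90
```

The guarantee is distribution-free and marginal: it holds on average over exchangeable data, which is exactly the property that makes conformal wrappers attractive for deployed predictors such as indoor positioning models.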