Advances in Adaptive Learning and Uncertainty Quantification

The field of machine learning is moving toward more adaptive and efficient algorithms that can handle real-world constraints such as limited data, latency, and memory. Researchers are exploring innovative approaches to active learning, uncertainty quantification, and online learning to improve the performance and reliability of models across applications including vision-language models, reinforcement learning, and surgical video analysis. Notable papers in this area include TAPS, which proposes a test-time active learning framework for adapting vision-language models to new data during inference, and PERRY, which introduces a conformal prediction method for constructing valid confidence intervals for off-policy evaluation in reinforcement learning. Other notable contributions include Awesome-OL, an extensible toolkit for online learning, and Approximating Full Conformal Prediction, which approximates full conformal prediction for neural network regression without requiring held-out data.
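To make the conformal prediction theme concrete, here is a minimal sketch of the generic *split* conformal recipe for regression: score a held-out calibration set with absolute residuals, take a finite-sample-corrected quantile, and use it as the prediction-interval half-width. This illustrates only the textbook baseline, not the specific constructions in the papers listed below; the model `f` and the data-generating process are hypothetical stand-ins.

```python
import math
import random

random.seed(0)

# Hypothetical pretrained predictor, standing in for any fitted model.
def f(x):
    return 2.0 * x

# Held-out calibration set drawn from y = 2x + Gaussian noise (illustrative).
calibration = [(x, 2.0 * x + random.gauss(0.0, 0.5))
               for x in (random.uniform(-1.0, 1.0) for _ in range(100))]

# Nonconformity scores: absolute residuals on the calibration split.
scores = sorted(abs(y - f(x)) for x, y in calibration)

# Finite-sample-corrected (1 - alpha) quantile gives the interval half-width.
alpha = 0.1
n = len(scores)
k = math.ceil((n + 1) * (1 - alpha)) - 1  # 0-based index into sorted scores
q = scores[k] if k < n else float("inf")

# Marginal (1 - alpha) prediction interval for a new input.
x_new = 0.3
interval = (f(x_new) - q, f(x_new) + q)
```

Under exchangeability of calibration and test points, this interval covers the true response with probability at least 1 - alpha; the works above focus on relaxing the held-out-data requirement (e.g. via cross-validation or influence approximations) or extending validity to settings like off-policy evaluation.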

Sources

TAPS: Frustratingly Simple Test Time Active Learning for VLMs

PERRY: Policy Evaluation with Confidence Intervals using Auxiliary Data

Awesome-OL: An Extensible Toolkit for Online Learning

Data-Efficient Prediction-Powered Calibration via Cross-Validation

Approximating Full Conformal Prediction for Neural Network Regression with Gauss-Newton Influence

Active Monitoring with RTLola: A Specification-Guided Scheduling Approach

Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration

StepAL: Step-aware Active Learning for Cataract Surgical Videos
