Advances in Robustness and Reliability of Machine Learning Models

The field of machine learning is moving toward more robust and reliable models, particularly in high-stakes applications such as medical imaging. Researchers are developing methods to detect and mitigate out-of-distribution (OOD) samples and are addressing issues of underspecification and spurious correlations. Techniques such as noise injection, stochastic weight averaging, and embedding regularization are being explored to improve model generalization and robustness. Noteworthy papers in this area include ODP-Bench, which provides a comprehensive benchmark for OOD performance prediction, and Spurious Correlation-Aware Embedding Regularization for Worst-Group Robustness, which proposes a novel approach to suppressing spurious cues in feature representations. Additionally, papers such as I Detect What I Don't Know and Noise Injection demonstrate the effectiveness of incremental anomaly learning and noise injection, respectively, in improving model performance on OOD data.
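To make the noise-injection idea concrete, the sketch below shows one common way it is applied in practice: adding zero-mean Gaussian noise to intermediate features during training only. This is a minimal, generic illustration, not the specific method of the cited Noise Injection paper; the `NoiseInjection` module, the `sigma` value, and the toy classifier are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Adds zero-mean Gaussian noise to activations during training only.

    A lightweight regularizer often used with limited-size datasets;
    sigma here is an illustrative choice, not a value from the paper.
    """
    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and self.sigma > 0:
            return x + self.sigma * torch.randn_like(x)
        return x

# Toy classifier with noise injected after the hidden layer.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    NoiseInjection(sigma=0.1),  # perturb intermediate features at train time
    nn.Linear(64, 10),
)

model.train()
train_logits = model(torch.randn(32, 128))  # noisy forward pass
model.eval()
eval_logits = model(torch.randn(32, 128))   # deterministic at evaluation time
```

Because the noise is gated on `self.training`, evaluation remains deterministic while training sees slightly perturbed representations, which is the usual mechanism by which noise injection is argued to improve OOD generalization.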

Sources

ODP-Bench: Benchmarking Out-of-Distribution Performance Prediction

Imbalanced Classification through the Lens of Spurious Correlations

On the Structure of Floating-Point Noise in Batch-Invariant GPU Matrix Multiplication

Weakly Supervised Concept Learning with Class-Level Priors for Interpretable Medical Diagnosis

GAFD-CC: Global-Aware Feature Decoupling with Confidence Calibration for OOD Detection

Accounting for Underspecification in Statistical Claims of Model Superiority

Noise Injection: Improving Out-of-Distribution Generalization for Limited Size Datasets

I Detect What I Don't Know: Incremental Anomaly Learning with Stochastic Weight Averaging-Gaussian for Oracle-Free Medical Imaging

Spurious Correlation-Aware Embedding Regularization for Worst-Group Robustness

Linear Mode Connectivity under Data Shifts for Deep Ensembles of Image Classifiers
