Advances in Robustness and Interpretability of Machine Learning Models

Machine learning research is moving toward models that are both robust and interpretable. Recent work addresses label noise, out-of-distribution (OOD) detection, and the quality of model explanations. Approaches such as adaptive label correction, faithfulness-guided ensemble interpretation, and bistochastic normalization of confusion matrices report gains in model performance and reliability; illustrative sketches of several of these techniques follow the list below. Noteworthy papers include:

  • Ordinal Adaptive Correction, which proposes a data-centric method for adaptively correcting noisy labels in ordinal image classification tasks (see the first sketch after this list).
  • Toward Faithfulness-guided Ensemble Interpretation of Neural Network, which introduces an ensemble framework aimed at broader and more effective faithfulness in neural network explanations.
  • Tackling the Noisy Elephant in the Room, which presents a label-noise-robust OOD detection framework that combines loss correction with low-rank and sparse decomposition (see the third sketch after this list).
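
To make the ordinal-correction idea concrete, here is a minimal sketch in Python. It assumes a common recipe: flag samples whose confident model prediction lies far from the given label on the ordinal scale, then relabel them. The function name, thresholds, and correction rule are illustrative assumptions, not the exact procedure from Ordinal Adaptive Correction.

```python
import numpy as np

def correct_ordinal_labels(probs, labels, dist_thresh=2, conf_thresh=0.8):
    """Relabel likely-noisy ordinal annotations.

    probs:  (n_samples, n_classes) softmax outputs of a trained model
    labels: (n_samples,) integer ordinal labels, possibly noisy
    A label is treated as noisy when the model is confident and its
    prediction is at least `dist_thresh` steps away on the ordinal scale.
    Thresholds here are illustrative, not the paper's values.
    """
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    suspect = (np.abs(preds - labels) >= dist_thresh) & (conf >= conf_thresh)
    corrected = labels.copy()
    corrected[suspect] = preds[suspect]  # move flagged labels to the prediction
    return corrected, suspect

# toy example: the third sample looks mislabeled
probs = np.array([[0.7, 0.2, 0.1, 0.0],
                  [0.1, 0.8, 0.1, 0.0],
                  [0.0, 0.05, 0.05, 0.9]])
labels = np.array([0, 1, 0])
print(correct_ordinal_labels(probs, labels))
```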
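
Bistochastic normalization of a confusion matrix is typically computed with the Sinkhorn-Knopp iteration, which alternately rescales rows and columns until both sum to one. A minimal sketch, assuming a square non-negative confusion matrix; the geometric interpretations developed in the normalization paper are not reproduced here.

```python
import numpy as np

def bistochastic_normalize(cm, n_iters=1000, tol=1e-9):
    """Sinkhorn-Knopp: alternately rescale rows and columns of a
    non-negative square matrix until it is (nearly) doubly stochastic."""
    m = cm.astype(float) + 1e-12  # small offset keeps empty cells from dividing by zero
    for _ in range(n_iters):
        m /= m.sum(axis=1, keepdims=True)  # rows sum to 1
        m /= m.sum(axis=0, keepdims=True)  # columns sum to 1
        if np.abs(m.sum(axis=1) - 1).max() < tol:  # rows still ~1 after the column step?
            break
    return m

cm = np.array([[50, 3, 1],
               [4, 40, 6],
               [2, 5, 45]])
print(bistochastic_normalize(cm).round(3))
```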
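
Finally, the low-rank and sparse decomposition used for noise-robust OOD detection can be approximated with a robust-PCA-style alternating scheme: singular-value thresholding recovers a low-rank component while entrywise soft-thresholding isolates sparse corruptions. This is a simplified sketch under standard robust-PCA defaults; the update rule and parameters are assumptions, not the paper's exact solver.

```python
import numpy as np

def lowrank_sparse_split(M, lam=None, n_iters=100):
    """Approximate M ≈ L + S with L low-rank and S sparse by alternating
    singular-value thresholding (for L) and soft-thresholding (for S)."""
    M = M.astype(float)
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))   # common robust-PCA default
    mu = M.size / (4.0 * np.abs(M).sum())   # step-size heuristic from the ALM literature
    S = np.zeros_like(M)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt  # shrink singular values
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)  # shrink entries
    return L, S

# toy example: rank-1 structure plus a few large corruptions
rng = np.random.default_rng(0)
u = rng.normal(size=(20, 1))
M = u @ u.T
M[3, 7] += 5.0
M[11, 2] -= 4.0
L, S = lowrank_sparse_split(M)
print(np.linalg.matrix_rank(np.round(L, 6)), int((np.abs(S) > 1e-6).sum()))
```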

Sources

Stage-wise Adaptive Label Distribution for Facial Age Estimation

Ordinal Adaptive Correction: A Data-Centric Approach to Ordinal Image Classification with Noisy Labels

Toward Faithfulness-guided Ensemble Interpretation of Neural Network

On the Normalization of Confusion Matrices: Methods and Geometric Interpretations

Prior Distribution and Model Confidence

DCV-ROOD Evaluation Framework: Dual Cross-Validation for Robust Out-of-Distribution Detection

Tackling the Noisy Elephant in the Room: Label Noise-robust Out-of-Distribution Detection via Loss Correction and Low-rank Decomposition
