Advances in Uncertainty Quantification and Robustness in Machine Learning

Machine learning research is placing growing emphasis on uncertainty quantification and robustness, developing methods that deliver accurate, reliable predictions in the presence of noise and distribution shift. This trend is driven by the need for trustworthy and explainable models, particularly in safety-critical applications. Recent work has produced new metrics and frameworks for evaluating and improving model calibration, robustness, and uncertainty quantification; notably, interval neural networks and robust random vector functional link networks show promise in addressing these challenges.
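To make the interval idea concrete, here is a minimal sketch of interval arithmetic propagated through one linear-plus-ReLU layer. This is a generic illustration of interval bound propagation, not the specific formulation in the interval neural network paper listed below; the weights and the uncertainty radius `eps` are invented for the example.

```python
import numpy as np

def interval_linear(l, u, W, b):
    """Propagate an input interval [l, u] through y = W x + b.
    Splitting W into positive and negative parts makes each
    output bound exact under interval arithmetic."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lower = Wp @ l + Wn @ u + b
    upper = Wp @ u + Wn @ l + b
    return lower, upper

def interval_relu(l, u):
    # ReLU is monotone, so it maps interval bounds to bounds directly.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
x = np.array([0.5, -0.2, 0.1])
eps = 0.05                            # assumed input uncertainty radius
l, u = interval_linear(x - eps, x + eps, W1, b1)
l, u = interval_relu(l, u)
assert np.all(l <= u)                 # bounds stay ordered

# Any point inside the input interval must land inside the output interval.
y = np.maximum(W1 @ x + b1, 0.0)
assert np.all(l <= y) and np.all(y <= u)
```

The key property, checked by the final assertion, is soundness: the exact output for any input inside the interval is guaranteed to lie between the propagated bounds.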

Several papers stand out for their innovative approaches. A comprehensive review of classifier probability calibration metrics provides a thorough account of how different metrics relate to one another. The Robustness Difference Index (RDI) is another significant contribution, offering an efficient and effective way to assess the adversarial robustness of deep neural networks. Finally, the FCGHunter framework has demonstrated impressive results in identifying vulnerabilities in graph-based Android malware detection systems.
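As background for the calibration-metrics discussion, the most common such metric is the expected calibration error (ECE), which bins predictions by confidence and compares each bin's mean confidence to its empirical accuracy. The sketch below implements standard binned ECE on invented toy data; it illustrates the general metric, not any paper's specific variant.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Binned ECE: average, weighted by bin size, of the gap between
    mean confidence and empirical accuracy within each confidence bin."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy case: ten predictions at 90% confidence, nine of them correct,
# so confidence matches accuracy and the ECE is zero.
conf = np.full(10, 0.9)
correct = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])
print(expected_calibration_error(conf, correct))  # → 0.0
```

A model that reported 0.99 confidence on the same answers would score a nonzero ECE, which is exactly the overconfidence that calibration metrics are designed to expose.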

Sources

A comprehensive review of classifier probability calibration metrics

Model Evaluation in the Dark: Robust Classifier Metrics with Missing Labels

Three Types of Calibration with Properties and their Semantic and Formal Relationships

An Axiomatic Assessment of Entropy- and Variance-based Uncertainty Quantification in Regression

RDI: An adversarial robustness evaluation metric for deep neural networks based on sample clustering features

Introducing Interval Neural Networks for Uncertainty-Aware System Identification

Newton-Puiseux Analysis for Interpretability and Calibration of Complex-Valued Neural Networks

FCGHunter: Towards Evaluating Robustness of Graph-Based Android Malware Detection

R^2VFL: A Robust Random Vector Functional Link Network with Huber-Weighted Framework

Exponentially Consistent Low Complexity Tests for Outlier Hypothesis Testing with Distribution Uncertainty
