Advancements in Uncertainty Quantification and Machine Learning

The field of machine learning is placing growing emphasis on uncertainty quantification and robustness. Recent work has focused on improving the reliability of model predictions, particularly in high-stakes applications such as healthcare. Techniques including Bayesian neural networks, variational autoencoders, and conformal prediction are being used to quantify and manage uncertainty. There is also rising interest in epistemic artificial intelligence, which aims to build models that can recognize and manage their own ignorance. Notable papers in this area include CoCoAFusE, which introduces a novel Bayesian covariate-dependent modeling technique, and Epistemic Wrapping, which proposes a methodology for improving uncertainty estimation in classification tasks. Overall, the field is moving toward a more nuanced understanding of uncertainty and its role in machine learning.
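Of the techniques mentioned, conformal prediction is perhaps the simplest to illustrate. The sketch below shows standard split conformal prediction for regression on synthetic data (the data, model, and coverage level are all illustrative assumptions, not drawn from any of the papers listed): fit a model on one split, compute absolute residuals on a held-out calibration split, and use a finite-sample-corrected quantile of those residuals as the half-width of a prediction interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical): y = 2x + Gaussian noise.
x = rng.uniform(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(0.0, 0.1, 200)

# Fit a simple least-squares line on the first half of the data.
x_fit, y_fit = x[:100], y[:100]
slope, intercept = np.polyfit(x_fit, y_fit, 1)

def predict(t):
    return slope * t + intercept

# Calibration: nonconformity scores are absolute residuals
# on the held-out second half.
x_cal, y_cal = x[100:], y[100:]
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile with the (n + 1) finite-sample correction,
# targeting 90% marginal coverage (alpha = 0.1).
alpha = 0.1
n = len(scores)
level = np.ceil((n + 1) * (1 - alpha)) / n
q = np.quantile(scores, min(level, 1.0), method="higher")

# Prediction interval for a new input: point prediction +/- q.
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
print(f"90% prediction interval at x={x_new}: [{lo:.3f}, {hi:.3f}]")
```

Under exchangeability of the calibration and test points, this interval covers the true response with probability at least 1 - alpha, regardless of how poor the underlying model is; model quality only affects the interval's width.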

Sources

CoCoAFusE: Beyond Mixtures of Experts via Model Fusion

Aggregation of Dependent Expert Distributions in Multimodal Variational Autoencoders

An Approach for Handling Missing Attribute Values in Attribute-Based Access Control Policy Mining

Epistemic Wrapping for Uncertainty Quantification

Uncovering Population PK Covariates from VAE-Generated Latent Spaces

Cooperative Bayesian and variance networks disentangle aleatoric and epistemic uncertainties

Uncertainty Quantification for Machine Learning in Healthcare: A Survey

Early Prediction of Sepsis: Feature-Aligned Transfer Learning

Prediction Models That Learn to Avoid Missing Values

Learning Survival Distributions with the Asymmetric Laplace Distribution

False Promises in Medical Imaging AI? Assessing Validity of Outperformance Claims

Conformal Prediction with Corrupted Labels: Uncertain Imputation and Robust Re-weighting

Prediction via Shapley Value Regression

Clustering with Communication: A Variational Framework for Single Cell Representation Learning

Position: Epistemic Artificial Intelligence is Essential for Machine Learning Models to Know When They Do Not Know

Performance Estimation in Binary Classification Using Calibrated Confidence

Nearly Optimal Sample Complexity for Learning with Label Proportions
