Advances in Uncertainty Quantification and Interpretability

The field of artificial intelligence is placing greater emphasis on uncertainty quantification and interpretability. Researchers are developing new methods to quantify and manage uncertainty in complex models such as tree ensembles and neural networks, drawing on Dempster-Shafer evidence theory, the entropic potential of events, and symbolic regression. The goal is to provide more comprehensive and reliable explanations of model decisions, which is essential in high-stakes domains such as healthcare analytics. Noteworthy papers in this area include the following; a short sketch of the evidence-combination step underlying the first of them appears after this summary.

FNBT proposes a full negation belief transformation for open-world information fusion based on Dempster-Shafer theory, demonstrating superior performance in pattern classification tasks.

UbiQTree introduces an approach for decomposing the uncertainty in SHAP values into aleatoric, epistemic, and entanglement components, giving a more complete picture of the reliability and interpretability of SHAP-based attributions.

Symbolic Quantile Regression predicts conditional quantiles with symbolic regression, outperforming transparent models and performing comparably to a strong black-box baseline without compromising transparency.

Extending the Entropic Potential of Events shows how the concept of entropic potential can enhance uncertainty quantification, decision-making, and interpretability in artificial intelligence.
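To make the Dempster-Shafer machinery concrete, the sketch below implements the classical Dempster rule of combination for two mass functions over a small frame of discernment. This is a generic illustration of the evidence-combination step that Dempster-Shafer fusion methods build on, not FNBT's full negation belief transformation; the frame, mass assignments, and function name are illustrative assumptions.

```python
from itertools import product

def combine_dempster(m1, m2):
    """Combine two Dempster-Shafer mass functions with Dempster's rule.

    m1, m2: dicts mapping frozenset focal elements to masses that sum to 1.
    Returns the combined, conflict-normalized mass function.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        weight = wa * wb
        if inter:
            combined[inter] = combined.get(inter, 0.0) + weight
        else:
            conflict += weight  # mass falling on the empty set (conflicting evidence)
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    # Normalize by 1 - K, where K is the total conflicting mass
    return {focal: w / (1.0 - conflict) for focal, w in combined.items()}

# Illustrative frame of discernment {A, B, C} and two bodies of evidence
m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.3, frozenset({"A", "B", "C"}): 0.1}
m2 = {frozenset({"B"}): 0.5, frozenset({"A", "C"}): 0.4, frozenset({"A", "B", "C"}): 0.1}

for focal, mass in sorted(combine_dempster(m1, m2).items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```

FNBT targets the open-world setting, where the frame of discernment may be incomplete across sources; the rule above is the classical closed-world combination that such open-world fusion methods generalize.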

Sources

FNBT: Full Negation Belief Transformation for Open-World Information Fusion Based on Dempster-Shafer Theory of Evidence

Symbolic Quantile Regression for the Interpretable Prediction of Conditional Quantiles

UbiQTree: Uncertainty Quantification in XAI with Tree Ensembles

Extending the Entropic Potential of Events for Uncertainty Quantification and Decision-Making in Artificial Intelligence
