Advances in Uncertainty Quantification and Transparency in Deep Learning

The field of deep learning is moving toward greater transparency and uncertainty quantification, with a focus on methods that deliver accurate, reliable predictions in complex and dynamic environments. This matters most in safety-critical applications such as autonomous driving and medical diagnosis, where incorrect predictions can have severe consequences. Recent research has explored uncertainty quantification techniques, such as Bayesian methods and ensemble approaches, to improve the reliability of deep learning models. There is also growing interest in transparency techniques, such as saliency maps and feature importance, that shed light on the decision-making processes of these models.

Noteworthy papers in this area include:

Semantic and Feature Guided Uncertainty Quantification of Visual Localization for Autonomous Vehicles, which develops an uncertainty quantification approach for visual localization in autonomous driving.

Transparency Techniques for Neural Networks trained on Writer Identification and Writer Verification, which applies transparency techniques to neural networks for writer identification and verification.

GNN's Uncertainty Quantification using Self-Distillation, which proposes a method for quantifying the predictive uncertainty of graph neural networks via self-distillation.

Towards Reliable Detection of Empty Space: Conditional Marked Point Processes for Object Detection, which proposes an object detection model grounded in spatial statistics to quantify uncertainty outside detected bounding boxes.
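To make the ensemble idea concrete, here is a minimal sketch of deep-ensemble uncertainty quantification. The "models" are stand-in random linear classifiers (an assumption for illustration; in practice they would be independently trained networks), but the uncertainty computation itself is standard: average the members' predictive distributions, then measure total uncertainty via entropy and member disagreement via variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in ensemble: random linear maps instead of trained networks.
n_models, n_features, n_classes = 5, 4, 3
ensemble = [rng.normal(size=(n_features, n_classes)) for _ in range(n_models)]

x = rng.normal(size=n_features)
probs = np.stack([softmax(x @ W) for W in ensemble])  # (n_models, n_classes)

mean_probs = probs.mean(axis=0)                       # ensemble prediction
entropy = -np.sum(mean_probs * np.log(mean_probs))    # total uncertainty
disagreement = probs.var(axis=0).sum()                # spread across members

print(mean_probs, entropy, disagreement)
```

High entropy with low disagreement suggests inherent (aleatoric) ambiguity, while high disagreement between members signals model (epistemic) uncertainty, which is exactly what ensembles are used to expose.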
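A saliency map, in its simplest gradient-based form, is just the magnitude of the model output's sensitivity to each input feature. The toy scorer `f` and its weights below are assumptions for illustration, and the gradient is estimated by central finite differences so the sketch stays framework-free; real implementations compute the same quantity with autodiff (backpropagation).

```python
import numpy as np

def f(x):
    # Toy differentiable "model" score; the weights are illustrative only.
    w = np.array([1.0, 0.0, 3.0])
    return float(w @ (x ** 2))

def saliency(f, x, eps=1e-5):
    # |df/dx_i| per feature, via central finite differences.
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return np.abs(grad)

s = saliency(f, np.array([1.0, 2.0, 1.0]))
```

Here the second feature gets near-zero saliency despite having the largest input value, because the score does not depend on it; this is the kind of insight saliency maps are meant to surface.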

Sources

Semantic and Feature Guided Uncertainty Quantification of Visual Localization for Autonomous Vehicles

Transparency Techniques for Neural Networks trained on Writer Identification and Writer Verification

Identifiability of Deep Polynomial Neural Networks

A Framework for Uncertainty Quantification Based on Nearest Neighbors Across Layers

EBC-ZIP: Improving Blockwise Crowd Counting with Zero-Inflated Poisson Regression

A Spatio-Temporal Point Process for Fine-Grained Modeling of Reading Behavior

GNN's Uncertainty Quantification using Self-Distillation

Towards Reliable Detection of Empty Space: Conditional Marked Point Processes for Object Detection
