Advances in Uncertainty Quantification and Deep Learning

The field of deep learning is placing greater emphasis on uncertainty quantification: developing methods that accurately quantify and propagate uncertainty through neural networks, driven by the need for robust and reliable models in safety-critical applications. Current work spans new uncertainty propagation techniques and rigorous bounds on the partial derivatives of deep networks with respect to their parameters. Interest is also building in the theoretical foundations of deep learning, including the role of depth and feature qualification by deep nets. Noteworthy papers include:

  • Uncertainty Quantification for Data-Driven Machine Learning Models in Nuclear Engineering Applications: Where We Are and What Do We Need?, which surveys the current state of uncertainty quantification for data-driven models in nuclear engineering and identifies what is still needed.
  • Bounds on Deep Neural Network Partial Derivatives with Respect to Parameters, which derives rigorous polynomial bounds on the partial derivatives of deep neural networks with respect to their parameters (a numerical sanity check of a bound of this flavor appears after this list).
  • Uncertainty propagation in feed-forward neural network models, which develops new uncertainty propagation methods for feed-forward networks with leaky ReLU activation functions (see the Monte Carlo sketch directly below).
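
To make the propagation idea concrete, here is a minimal Monte Carlo sketch. It is not the method from the paper above: Gaussian input uncertainty is simply sampled and pushed through a small feed-forward network with leaky ReLU activations, and the output mean and covariance are estimated from the samples. The architecture, weights, and noise scale are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

# Illustrative 2-layer feed-forward network; weights are random stand-ins.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

def net(x):
    return W2 @ leaky_relu(W1 @ x + b1) + b2

# Gaussian input uncertainty: mean mu, covariance Sigma (assumed values).
mu = np.array([0.5, -1.0, 0.2, 0.0])
Sigma = 0.05 * np.eye(4)

# Monte Carlo propagation: sample inputs, push each sample through the
# network, then summarize the induced output distribution.
samples = rng.multivariate_normal(mu, Sigma, size=10_000)
outputs = np.array([net(x) for x in samples])

print("output mean:", outputs.mean(axis=0))
print("output covariance:\n", np.cov(outputs, rowvar=False))
```

An analytic propagation method would replace the sampling loop with closed-form per-layer moment updates; the Monte Carlo estimate here is the reference against which such methods are typically validated.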

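The derivative-bound result can likewise be probed numerically. The sketch below is a hypothetical setup, not the paper's stated bound: autograd measures the norm of a scalar output's gradient with respect to the first weight matrix, and the result is compared against a simple hand-derived polynomial bound in the second layer's norm and the input norm.

```python
import torch

torch.manual_seed(0)

# Two-layer scalar network y = w2 . leaky_relu(W1 @ x); sizes are illustrative.
W1 = torch.randn(8, 4, requires_grad=True)
w2 = torch.randn(8)
x = torch.randn(4)

act = torch.nn.functional.leaky_relu   # default slope 0.01, so |act'(z)| <= 1
y = w2 @ act(W1 @ x)                   # scalar output

# Measured quantity: Frobenius norm of dy/dW1.
(grad_W1,) = torch.autograd.grad(y, W1)
measured = grad_W1.norm()

# Hand-derived polynomial bound for this architecture (not the paper's):
# dy/dW1[j,k] = w2[j] * act'(z[j]) * x[k]  =>  ||dy/dW1||_F <= ||w2|| * ||x||.
bound = w2.norm() * x.norm()

print(f"measured {measured:.4f} <= bound {bound:.4f}: {bool(measured <= bound)}")
```
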
Sources

  • Uncertainty Quantification for Data-Driven Machine Learning Models in Nuclear Engineering Applications: Where We Are and What Do We Need?
  • Feature Qualification by Deep Nets: A Constructive Approach
  • Bounds on Deep Neural Network Partial Derivatives with Respect to Parameters
  • Uncertainty propagation in feed-forward neural network models
  • Elementwise Layer Normalization
  • Approximation results on neural network operators of convolution type
