Deep learning research is placing growing emphasis on uncertainty quantification: methods that capture and represent the uncertainty in a model's predictions. The push comes from the need for reliable, trustworthy models, particularly in safety-critical applications. Recent work spans ensemble-based methods, the decomposition of predictive uncertainty into aleatoric and epistemic components, and probabilistic measures of how representative a scenario suite is. These advances stand to improve the robustness and reliability of deep learning models. Notable papers in this area include:
- A position paper that formalizes the problem of uncertainty quantification in generative model learning and outlines directions for future research.
- A paper that introduces a lightweight inference-time framework for disentangling aleatoric and epistemic uncertainty in deep feature space, yielding significant computational savings (a generic version of this decomposition is sketched after this list).
- A paper that proposes a framework for compressing a deep ensemble into a single classification model, achieving comparable or superior uncertainty estimation while reducing inference overhead (a generic distillation sketch follows the decomposition example below).
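
To make the aleatoric/epistemic split concrete, the following is a minimal sketch of the standard entropy-based decomposition computed from an ensemble's softmax outputs. It illustrates the general idea only, not the feature-space framework of the cited paper; the array shapes and the `decompose_uncertainty` helper are assumptions made for the example.

```python
# Minimal sketch: entropy-based decomposition of predictive uncertainty
# from an ensemble of classifiers (illustrative only, not the cited
# paper's feature-space method).
import numpy as np

def decompose_uncertainty(probs, eps=1e-12):
    """probs: array of shape (n_members, n_samples, n_classes)
    holding each ensemble member's softmax outputs.

    Returns (total, aleatoric, epistemic) uncertainty per sample, in nats.
    """
    mean_probs = probs.mean(axis=0)                       # (n_samples, n_classes)
    # Total uncertainty: entropy of the ensemble-averaged prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + eps), axis=-1)
    # Aleatoric uncertainty: average entropy of the individual members.
    member_entropy = -np.sum(probs * np.log(probs + eps), axis=-1)   # (n_members, n_samples)
    aleatoric = member_entropy.mean(axis=0)
    # Epistemic uncertainty: the difference, i.e. the mutual information
    # between the prediction and the choice of ensemble member.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Toy example: a 3-member ensemble, 2 samples, 3 classes.
probs = np.array([
    [[0.7, 0.2, 0.1], [0.4, 0.3, 0.3]],
    [[0.6, 0.3, 0.1], [0.1, 0.6, 0.3]],
    [[0.8, 0.1, 0.1], [0.3, 0.2, 0.5]],
])
total, aleatoric, epistemic = decompose_uncertainty(probs)
print(total, aleatoric, epistemic)
```

Total uncertainty is the entropy of the averaged prediction, aleatoric uncertainty is the mean per-member entropy, and epistemic uncertainty is their difference, which vanishes when all members agree.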
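
Ensemble compression is commonly approached as distillation: a single student network is trained to match the ensemble's averaged soft predictions. The sketch below is a generic PyTorch recipe under that assumption, not the cited paper's method; the model sizes, temperature, and `distill_step` helper are illustrative.

```python
# Minimal sketch: compressing an ensemble into a single student via
# distillation on the ensemble's averaged soft labels (generic recipe;
# the cited paper's framework and losses may differ).
import torch
import torch.nn.functional as F

def distill_step(student, ensemble, x, optimizer, temperature=2.0):
    """One optimisation step matching the student's softened predictions
    to the averaged softened predictions of the (frozen) ensemble."""
    with torch.no_grad():
        # Teacher distribution: mean of the members' softened softmax outputs.
        teacher_probs = torch.stack(
            [F.softmax(m(x) / temperature, dim=-1) for m in ensemble]
        ).mean(dim=0)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    # KL divergence from the teacher distribution to the student's.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with small MLPs on random data (hypothetical shapes).
if __name__ == "__main__":
    make_mlp = lambda: torch.nn.Sequential(
        torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3)
    )
    ensemble = [make_mlp() for _ in range(5)]
    student = make_mlp()
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.randn(16, 10)
    print(distill_step(student, ensemble, x, optimizer))
```

Matching only the ensemble mean preserves the averaged class probabilities but discards the spread across members; approaches that aim to retain the aleatoric/epistemic split typically distill the distribution over member predictions instead.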