The field of deep learning is placing growing emphasis on uncertainty quantification: developing methods that produce accurate, well-calibrated estimates of uncertainty in model outputs. This shift is driven by the need for more reliable and robust models, particularly in safety-critical applications where overconfident predictions can have serious consequences. Researchers are exploring a range of approaches, including Bayesian neural networks, Gaussian processes, and Laplace approximations, to improve the accuracy and reliability of uncertainty estimates.

Notable papers in this area include a Semantic-Aware Gaussian Process calibration framework, which improves both the interpretability and the effectiveness of assessments of predictive reliability, and a confidence optimization probabilistic encoding method, which makes embedding distances more reliable and strengthens representation learning. Another notable development is the Distributional Uncertainty for Out-of-Distribution Detection method, which jointly models distributional uncertainty and identifies out-of-distribution (OoD) and misclassified regions using a free-energy score, as sketched below.
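The free-energy score referenced above is, in its standard form, computed directly from a classifier's logits. Below is a minimal sketch of that generic energy score in PyTorch; it illustrates the general energy-based idea rather than the specific joint-modeling method described above, and the threshold and toy logits are illustrative assumptions.

```python
import torch

def free_energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Free energy E(x) = -T * logsumexp(logits / T).

    Higher (less negative) energy marks inputs to which the classifier
    assigns low overall likelihood, a common signal for OoD data.
    """
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

# Toy usage: random logits standing in for a trained classifier's outputs.
logits = torch.randn(4, 10)   # batch of 4 inputs, 10 classes
energy = free_energy_score(logits)
tau = 0.0                     # illustrative threshold; tuned on held-out data in practice
is_ood = energy > tau
print(energy, is_ood)
```

In practice the threshold is chosen on validation data, for example to fix a target false-positive rate on in-distribution samples, and inputs whose energy exceeds it are flagged as OoD.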