The field of uncertainty quantification and representation learning is moving toward more accurate and efficient methods for estimating and decomposing uncertainty in deep learning models. Recent work has focused on novel frameworks for uncertainty estimation, such as variance-gated measures and surrogate representation inference, which aim to provide more robust and reliable uncertainty estimates. There is also growing interest in post-hoc methods, which can be applied to pre-trained models without retraining. Notable papers in this area include:
- Surrogate Representation Inference for Noisy Text and Image Annotations, which introduces a neural network architecture for learning low-dimensional representations of unstructured data.
- Post-Hoc Split-Point Self-Consistency Verification for Efficient, Unified Quantification of Aleatoric and Epistemic Uncertainty in Deep Learning, which proposes a single-forward-pass framework for jointly capturing aleatoric and epistemic uncertainty.
- Disproving the Feasibility of Learned Confidence Calibration Under Binary Supervision, which proves an impossibility theorem: well-calibrated confidence estimates with meaningful diversity cannot be learned from binary supervision alone.
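To make the aleatoric/epistemic distinction referenced above concrete, the sketch below shows the common entropy-based decomposition of predictive uncertainty computed from repeated stochastic forward passes (e.g., MC dropout or an ensemble). It is a generic illustration under that sampling assumption, not the single-forward-pass method of the cited paper; the function name and sample values are hypothetical.

```python
import numpy as np

def decompose_uncertainty(probs):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    probs: array of shape (n_samples, n_classes) holding class probabilities
    from n_samples stochastic forward passes (MC dropout, ensemble members, ...).
    Returns (total, aleatoric, epistemic) in nats.
    """
    eps = 1e-12
    mean_probs = probs.mean(axis=0)
    # Total uncertainty: entropy of the mean predictive distribution.
    total = -np.sum(mean_probs * np.log(mean_probs + eps))
    # Aleatoric uncertainty: average entropy of the individual predictions.
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Epistemic uncertainty: the gap (mutual information between label and model).
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Hypothetical example: five stochastic passes on a 3-class problem.
samples = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.8, 0.1, 0.1],
    [0.5, 0.4, 0.1],
])
print(decompose_uncertainty(samples))
```

Disagreement across the sampled predictions inflates the epistemic term, while confidently spread individual predictions inflate the aleatoric term; post-hoc and single-forward-pass methods aim to recover a similar split without the repeated sampling shown here.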