The field of large language models is moving towards improved uncertainty quantification, with a focus on detecting hallucinations and preventing the generation of false or misleading content. Recent research has explored state-of-the-art uncertainty quantification techniques, including approaches based on information theory and multi-model ensembles. These methods aim to provide more reliable uncertainty estimates, which are crucial for high-stakes applications. There is also growing interest in understanding how large language models internally represent and process their predictions, and how this relates to their uncertainty. Noteworthy papers include UQLM, which introduces a Python package for hallucination detection using uncertainty quantification techniques; On the Effect of Uncertainty on Layer-wise Inference Dynamics, which demonstrates that uncertainty does not significantly affect inference dynamics; An Information-Theoretic Perspective on Multi-LLM Uncertainty Estimation, which proposes a method for aggregating uncertainty estimates from multiple models; and ViLU: Learning Vision-Language Uncertainties for Failure Prediction, which introduces a framework for uncertainty quantification in vision-language models.
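
To make the information-theoretic, multi-model idea concrete, the sketch below scores each model's uncertainty as the Shannon entropy of its sampled responses to the same prompt and combines the per-model scores with a weighted average. This is a minimal illustration under stated assumptions, not the API of UQLM or the method of any cited paper; the model names, sample responses, and function names are hypothetical.

```python
import math
from collections import Counter
from typing import Dict, List, Optional


def response_entropy(samples: List[str]) -> float:
    """Shannon entropy (in nats) of the empirical distribution over distinct responses.

    Higher entropy means the model's sampled answers disagree more,
    which we treat as a proxy for higher uncertainty.
    """
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())


def aggregate_uncertainty(per_model_samples: Dict[str, List[str]],
                          weights: Optional[Dict[str, float]] = None) -> float:
    """Weighted average of per-model entropies as a simple ensemble-level uncertainty score."""
    if weights is None:
        weights = {name: 1.0 for name in per_model_samples}
    total_w = sum(weights[name] for name in per_model_samples)
    return sum(weights[name] * response_entropy(samples)
               for name, samples in per_model_samples.items()) / total_w


if __name__ == "__main__":
    # Hypothetical sampled answers to the same prompt from two different LLMs.
    samples = {
        "model_a": ["Paris", "Paris", "Paris", "Lyon"],   # mostly consistent -> low entropy
        "model_b": ["Paris", "Madrid", "Rome", "Paris"],  # scattered -> high entropy
    }
    for name, s in samples.items():
        print(f"{name}: entropy = {response_entropy(s):.3f} nats")
    print(f"ensemble uncertainty = {aggregate_uncertainty(samples):.3f} nats")
```

In practice, such a score would be calibrated against held-out data and thresholded before being used to flag likely hallucinations; the cited works develop more sophisticated ways of estimating and aggregating these signals.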