The field of large language models (LLMs) is placing growing emphasis on uncertainty quantification, with a focus on methods that predict and explain the confidence of model outputs. This direction is driven by the need for more reliable and trustworthy AI systems, particularly in high-stakes applications such as medical diagnostics and automated essay assessment. Researchers are exploring approaches including conformal prediction, approximate Bayesian computation, and explainable uncertainty estimation to provide valid uncertainty guarantees and to improve the calibration and accuracy of LLM outputs. Noteworthy papers in this area include Quantifying Uncertainty in Natural Language Explanations of Large Language Models for Question Answering, which proposes a novel uncertainty estimation framework for natural language explanations, and Uncertainty Quantification of Large Language Models using Approximate Bayesian Computation, which improves the accuracy and calibration of LLMs with a likelihood-free Bayesian inference approach.
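Conformal prediction is the source of the "valid uncertainty guarantees" mentioned above: it wraps any confidence score in a distribution-free coverage guarantee. The sketch below is a minimal, hypothetical illustration of split conformal prediction applied to LLM answer confidences; the function names, the uniform stand-in calibration scores, and the treatment of answers as a small candidate set are assumptions for illustration only, not the method of either cited paper.

```python
import numpy as np

def conformal_threshold(cal_true_probs, alpha=0.1):
    """Split conformal calibration step.

    cal_true_probs: on a held-out calibration set, the probability the
    model assigned to the answer that turned out to be correct.
    Returns a nonconformity threshold q_hat.
    """
    nonconformity = 1.0 - np.asarray(cal_true_probs)  # low prob on the truth = surprising
    n = len(nonconformity)
    # Finite-sample corrected quantile level for marginal coverage >= 1 - alpha.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(nonconformity, q_level, method="higher")

def prediction_set(candidate_probs, q_hat):
    """Keep every candidate answer whose nonconformity stays below q_hat.

    Under exchangeability, the returned set contains the true answer with
    probability at least 1 - alpha.
    """
    return [i for i, p in enumerate(candidate_probs) if 1.0 - p <= q_hat]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cal_true_probs = rng.uniform(0.3, 1.0, size=500)  # stand-in for real model confidences
    q_hat = conformal_threshold(cal_true_probs, alpha=0.1)
    # Probabilities an LLM assigned to four candidate answers for a new question.
    print(prediction_set([0.92, 0.40, 0.15, 0.75], q_hat))
```

The key design point is that the guarantee is marginal and model-agnostic: the LLM's confidences may be poorly calibrated, yet the prediction sets still achieve the target coverage on exchangeable data, trading set size for reliability.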