The field of large language models (LLMs) is rapidly advancing, with a growing focus on uncertainty quantification and explainability. Recent research has explored methods for evaluating and improving the reliability of LLMs, including the integration of uncertainty quantification into argumentative LLMs and the development of novel frameworks for uncertainty quantification in generative video models. There is also increasing interest in using counterfactual explanations to provide insight into model behavior and decision-making. Noteworthy papers in this area include 'Uncertainty as Feature Gaps: Epistemic Uncertainty Quantification of LLMs in Contextual Question-Answering', which proposes a theoretically grounded approach to quantifying epistemic uncertainty, and 'How Confident are Video Models? Empowering Video Models to Express their Uncertainty', which presents a framework for uncertainty quantification in generative video models. Overall, the field is moving toward LLMs that are more robust and reliable, and whose outputs are accompanied by calibrated uncertainty estimates and transparent explanations.
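To make the notion of uncertainty quantification for LLM question answering concrete, the sketch below shows one common sampling-based baseline: sample several answers with a stochastic decoder and measure how much they disagree via the entropy of the empirical answer distribution. This is a generic illustration, not the method of either paper cited above; the `generate` callable and `toy_generate` stand-in are hypothetical placeholders for any LLM sampling interface.

```python
# Minimal sketch of a sampling-based uncertainty baseline for LLM question answering.
# Assumption: `generate` is a hypothetical stand-in for an LLM sampling call; this is
# not the approach of the papers cited above, only an illustration of the general idea
# that disagreement among sampled answers signals higher uncertainty.

import math
from collections import Counter
from typing import Callable, List


def predictive_entropy(answers: List[str]) -> float:
    """Entropy of the empirical distribution over distinct sampled answers.

    Higher entropy means the samples disagree more, i.e. higher estimated uncertainty.
    """
    counts = Counter(a.strip().lower() for a in answers)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs)


def estimate_uncertainty(generate: Callable[[str], str],
                         question: str,
                         n_samples: int = 10) -> float:
    """Sample n answers with a stochastic decoder and score their disagreement."""
    answers = [generate(question) for _ in range(n_samples)]
    return predictive_entropy(answers)


if __name__ == "__main__":
    import random

    # Toy stand-in for an LLM: draws from a fixed pool of plausible answers.
    def toy_generate(question: str) -> str:
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])

    print(estimate_uncertainty(toy_generate, "What is the capital of France?"))
```

A limitation worth noting: answer-frequency entropy of this kind mixes epistemic and aleatoric uncertainty, which is precisely the distinction that more principled approaches, such as the feature-gap formulation mentioned above, aim to address.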