The field of large language models (LLMs) is evolving rapidly, with a focus on improving performance on knowledge-intensive tasks. Recent work centers on enhancing the ability of LLMs to incorporate external knowledge, mitigate hallucination, and resolve conflicts between different sources of information. One key research direction is retrieval-augmented generation (RAG), which supplies LLMs with relevant, up-to-date information; however, retrieval can also introduce conflicting information and uncertainty, which must be handled explicitly. Recent papers propose frameworks for resolving knowledge conflicts, detecting uncertainty, and improving the faithfulness of LLMs to the retrieved context. Overall, the field is moving towards more reliable LLMs that can effectively integrate external knowledge and reason about uncertainty.

Noteworthy papers include:
- FaithfulRAG: proposes a framework for resolving knowledge conflicts by explicitly modeling discrepancies between the model's parametric knowledge and the retrieved context.
- AbstentionBench: introduces a large-scale benchmark for evaluating the ability of LLMs to abstain from answering unanswerable questions.
- ThinkQE: proposes a test-time query expansion framework that encourages deeper and more comprehensive semantic exploration.
- Query-Level Uncertainty: introduces a method for detecting knowledge boundaries via query-level uncertainty.
- Reasoning Models Are More Easily Gaslighted Than You Think: systematically evaluates how well reasoning models withstand misleading user input.
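
To make the RAG setup concrete, below is a minimal sketch of the retrieve-then-generate loop. It assumes a toy bag-of-words retriever and a placeholder `generate` function standing in for an actual LLM call; the corpus, scoring, and prompt format are illustrative assumptions, not the method of any paper listed above.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a dense encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    # Placeholder for an actual LLM call (assumption: any chat/completions API).
    return f"<answer conditioned on a prompt of {len(prompt)} characters>"

def rag_answer(query: str, corpus: list[str]) -> str:
    # Instruct the model to stay faithful to the retrieved context.
    context = "\n".join(retrieve(query, corpus))
    prompt = (
        "Answer using ONLY the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)

if __name__ == "__main__":
    corpus = [
        "FaithfulRAG targets conflicts between parametric and retrieved knowledge.",
        "AbstentionBench evaluates whether models abstain on unanswerable questions.",
        "ThinkQE expands queries at test time for broader semantic coverage.",
    ]
    print(rag_answer("What does FaithfulRAG address?", corpus))
```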
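The uncertainty and abstention themes can likewise be sketched as a simple confidence gate: given per-token log-probabilities for a candidate answer (obtained however the serving stack exposes them), abstain when average confidence falls below a threshold. The mean-log-probability score and the cutoff value are illustrative assumptions, not the metrics used by Query-Level Uncertainty or AbstentionBench.

```python
def mean_logprob(token_logprobs: list[float]) -> float:
    # Average per-token log-probability of a candidate answer.
    return sum(token_logprobs) / len(token_logprobs)

def should_abstain(token_logprobs: list[float], threshold: float = -1.5) -> bool:
    # Abstain when average confidence is low; -1.5 nats/token is an arbitrary cutoff.
    return mean_logprob(token_logprobs) < threshold

def answer_or_abstain(answer: str, token_logprobs: list[float]) -> str:
    # Return the answer only if the model appears confident; otherwise abstain.
    if should_abstain(token_logprobs):
        return "I don't know."
    return answer

# Example: a confident answer is returned, a low-confidence one triggers abstention.
print(answer_or_abstain("Paris", [-0.1, -0.2]))
print(answer_or_abstain("Atlantis", [-2.3, -3.1, -2.8]))
```

In practice, the gate would be calibrated on held-out questions with known answerability rather than set to a fixed constant.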