The field of retrieval-augmented generation (RAG) is moving toward more principled and effective methods for incorporating external knowledge into large language models (LLMs). Recent work has focused on improving the accuracy and efficiency of RAG systems, particularly when the retrieved context is long or noisy. Notable advances include the use of conformal prediction for coverage-controlled context reduction, the integration of hyperbolic geometry into graph-based RAG, and unified frameworks that jointly optimize retrieval and generation. These innovations can substantially improve the performance of RAG systems and their ability to return accurate, informative responses.

Noteworthy papers include:
- Principled Context Engineering for RAG: demonstrates the effectiveness of conformal prediction for coverage-controlled context reduction.
- HyperbolicRAG: introduces hyperbolic geometry into graph-based RAG and achieves state-of-the-art performance on multiple QA benchmarks.
- CLaRa: proposes a unified framework for joint optimization of retrieval and generation and achieves state-of-the-art compression and reranking performance.
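To make the coverage-controlled context reduction idea concrete, here is a minimal split-conformal sketch; it is a generic illustration, not the specific method from Principled Context Engineering for RAG. It assumes you have nonconformity scores for calibration chunks known to be relevant (for example, one minus a retriever relevance score), and the function names `conformal_threshold` and `filter_context` are illustrative.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Split-conformal quantile over calibration nonconformity scores.

    cal_scores: scores of chunks known to be relevant (lower = more relevant).
    With probability >= 1 - alpha, a new relevant chunk's score falls at or
    below the returned threshold.
    """
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def filter_context(chunks, scores, threshold):
    """Keep only chunks whose nonconformity score is within the calibrated bound,
    shrinking the prompt while controlling the risk of dropping needed evidence."""
    return [c for c, s in zip(chunks, scores) if s <= threshold]

# Example usage with hypothetical scores:
# tau = conformal_threshold(calibration_scores, alpha=0.1)
# kept = filter_context(retrieved_chunks, retrieval_scores, tau)
```

The appeal of this style of filtering is that the amount of context passed to the LLM adapts per query, while the calibration step gives a distribution-free guarantee on how often relevant evidence is discarded.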