The field of retrieval-augmented generation (RAG) is moving toward more sophisticated capabilities, with a focus on improving systems' ability to reason and generate high-quality content. Recent research highlights the importance of frameworks and benchmarks for evaluating RAG systems, along with the need for more effective methods for data quality management and human-AI collaboration. Notable papers in this area include:

- "From Search to Reasoning: A Five-Level RAG Capability Framework for Enterprise Data", which proposes a new classification framework for RAG systems and evaluates state-of-the-art platforms.
- "DS-STAR: Data Science Agent via Iterative Planning and Verification", which introduces a novel data science agent that reliably navigates complex analyses involving diverse data sources.
- "Are LLMs ready to help non-expert users to make charts of official statistics data?", which presents a structured evaluation of recent large language models' ability to generate charts from complex data in response to user queries.
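For readers new to the area, the basic RAG loop — retrieve documents relevant to a query, then generate an answer grounded in them — can be sketched as below. This is a toy illustration only: the keyword-overlap retriever and the template "generator" are stand-ins (real systems use embedding search and an LLM), and none of it reflects the methods of the papers above.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query, return top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: produce an answer conditioned on retrieved context."""
    return f"Q: {query}\nGrounded in: {' | '.join(context)}"


docs = [
    "RAG systems combine retrieval with generation.",
    "Official statistics are published by national agencies.",
]
query = "How do RAG systems work?"
print(generate(query, retrieve(query, docs)))
```

The evaluation frameworks discussed above probe exactly the two stages this sketch separates: whether the right context is retrieved, and whether the generated answer stays grounded in it.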