Advancements in Retrieval-Augmented Generation

The field of retrieval-augmented generation (RAG) is moving toward more sophisticated capabilities, with a focus on improving systems' ability to reason over retrieved data and generate high-quality content. Recent research highlights the importance of frameworks and benchmarks for evaluating RAG systems, as well as the need for more effective data quality management and human-AI collaboration. Notable papers in this area include:

From Search to Reasoning: A Five-Level RAG Capability Framework for Enterprise Data, which proposes a new classification framework for RAG systems and evaluates state-of-the-art platforms against it.

DS-STAR: Data Science Agent via Iterative Planning and Verification, which introduces a data science agent that reliably navigates complex analyses involving diverse data sources.

Are LLMs ready to help non-expert users to make charts of official statistics data?, which presents a structured evaluation of recent large language models' ability to generate charts from complex data in response to user queries.
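To make the core RAG loop concrete, here is a minimal sketch of the retrieve-then-generate pattern the papers above build on. It is an illustration only: retrieval is a toy bag-of-words cosine similarity rather than a learned embedding model with a vector index, the corpus is invented, and the generation step is left as a stub where a real system would call an LLM.

```python
# Minimal RAG sketch: embed the query, retrieve the top-k most similar
# documents, and assemble an augmented prompt for a (stubbed) generator.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency vector over lowercase tokens.
    Real systems would use a learned dense embedding model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank the corpus by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, contexts: list[str]) -> str:
    """Augment the user query with retrieved context; a real system
    would pass this prompt to an LLM for the generation step."""
    ctx = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

# Hypothetical corpus, for illustration only.
corpus = [
    "RAG systems ground LLM answers in retrieved enterprise documents.",
    "Data quality management is critical for retrieval corpora.",
    "Charts visualize official statistics for non-expert users.",
]
print(build_prompt("How do RAG systems ground answers?",
                   retrieve("How do RAG systems ground answers?", corpus)))
```

Evaluation frameworks like those surveyed above probe exactly these stages: retrieval quality, the faithfulness of generation to the retrieved context, and the quality of the underlying corpus.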

Sources

From Search to Reasoning: A Five-Level RAG Capability Framework for Enterprise Data

DS-STAR: Data Science Agent via Iterative Planning and Verification

Machines in the Margins: A Systematic Review of Automated Content Generation for Wikipedia

Auto-ARGUE: LLM-Based Report Generation Evaluation

Human-Centered Evaluation of RAG outputs: a framework and questionnaire for human-AI collaboration

HLTCOE at TREC 2024 NeuCLIR Track

Data Quality Challenges in Retrieval-Augmented Generation

Are LLMs ready to help non-expert users to make charts of official statistics data?

Evaluation Sheet for Deep Research: A Use Case for Academic Survey Writing
