Advancements in Retrieval-Augmented Generation

The field of retrieval-augmented generation (RAG) is advancing rapidly, with a focus on improving both the effectiveness and the efficiency of large language models (LLMs) across applications. Recent work explores handling diverse queries, dynamically selecting and integrating multiple retrievers, and even incorporating non-oracle human information sources as retrievers in their own right. These innovations have yielded measurable gains, with some compact models outperforming larger counterparts and reaching state-of-the-art results. Combining RAG with other techniques, such as reinforcement learning and self-supervised learning, has also shown promise. Applications in medical question answering, low-carbon network optimization, and multi-turn conversational dialogue systems point to real-world impact. Noteworthy papers include MoR, which introduces a mixture of sparse, dense, and human retrievers, and Revela, which proposes a unified framework for self-supervised retriever learning via language modeling. CCRS and COIN, meanwhile, advance the evaluation side of RAG, underscoring the need for comprehensive metrics and uncertainty quantification.
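To make the mixture-of-retrievers idea concrete, here is a minimal, self-contained sketch of late score fusion across a sparse and a dense retriever. Everything in it is an illustrative assumption: the toy scorers stand in for BM25 and a trained encoder, and the fixed fusion weights stand in for whatever per-query routing a system like MoR actually learns; none of this reproduces any paper's method.

```python
# Minimal sketch of weighted late fusion across heterogeneous retrievers.
# SparseRetriever, DenseRetriever, and fuse are illustrative stand-ins,
# not components from MoR or any other paper listed below.
from collections import Counter
import math

DOCS = [
    "retrieval augmented generation grounds llm answers in documents",
    "dense retrievers embed queries and documents into vectors",
    "sparse retrievers match on exact term overlap like bm25",
]

class SparseRetriever:
    """Toy lexical scorer: length-normalized term overlap (BM25 stand-in)."""
    def score(self, query: str, doc: str) -> float:
        q, d = Counter(query.split()), Counter(doc.split())
        overlap = sum((q & d).values())
        return overlap / math.sqrt(len(doc.split()) + 1)

class DenseRetriever:
    """Toy 'embedding' scorer: hashed bag-of-words cosine (encoder stand-in)."""
    def _embed(self, text: str, dim: int = 64) -> list[float]:
        v = [0.0] * dim
        for tok in text.split():
            v[hash(tok) % dim] += 1.0
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / norm for x in v]

    def score(self, query: str, doc: str) -> float:
        q, d = self._embed(query), self._embed(doc)
        return sum(a * b for a, b in zip(q, d))

def fuse(query, docs, retrievers, weights):
    """Rank docs by a convex combination of each retriever's score."""
    scored = []
    for doc in docs:
        s = sum(w * r.score(query, doc) for r, w in zip(retrievers, weights))
        scored.append((s, doc))
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    retrievers = [SparseRetriever(), DenseRetriever()]
    # A real mixture would predict these weights per query; fixed
    # 50/50 weights keep the sketch self-contained.
    for score, doc in fuse("sparse term overlap", DOCS, retrievers, [0.5, 0.5]):
        print(f"{score:.3f}  {doc}")
```

The design point the sketch illustrates is that fusion happens over scores rather than inside any single retriever, so new sources, including human ones, can be added without retraining the others.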

Sources

MoR: Better Handling Diverse Queries with a Mixture of Sparse, Dense, and Human Retrievers

From RAG to Agentic: Validating Islamic-Medicine Responses with LLM Agents

HybridRAG-based LLM Agents for Low-Carbon Optimization in Low-Altitude Economy Networks

From General to Targeted Rewards: Surpassing GPT-4 in Open-Ended Long-Context Generation

SGIC: A Self-Guided Iterative Calibration Framework for RAG

Revela: Dense Retriever Learning via Language Modeling

Mechanisms vs. Outcomes: Probing for Syntax Fails to Explain Performance on Targeted Syntactic Evaluations

Conversational Intent-Driven GraphRAG: Enhancing Multi-Turn Dialogue Systems through Adaptive Dual-Retrieval of Flow Patterns and Context Semantics

Accurate and Energy Efficient: Local Retrieval-Augmented Generation Models Outperform Commercial Large Language Models in Medical Tasks

Controlled Retrieval-augmented Context Evaluation for Long-form RAG

SACL: Understanding and Combating Textual Bias in Code Retrieval with Semantic-Augmented Reranking and Localization

CCRS: A Zero-Shot LLM-as-a-Judge Framework for Comprehensive RAG Evaluation

COIN: Uncertainty-Guarding Selective Question Answering for Foundation Models with Provable Risk Guarantees

AI Assistants to Enhance and Exploit the PETSc Knowledge Base

Engineering RAG Systems for Real-World Applications: Design, Development, and Evaluation

Metadata Enrichment of Long Text Documents using Large Language Models

Response Quality Assessment for Retrieval-Augmented Generation via Conditional Conformal Factuality

Enhancing Automatic Term Extraction with Large Language Models via Syntactic Retrieval

Leveraging LLM-Assisted Query Understanding for Live Retrieval-Augmented Generation
