Advances in Retrieval-Augmented Generation and Knowledge Graph-Based Question Answering

The field of natural language processing is seeing rapid progress in retrieval-augmented generation (RAG) and knowledge graph-based question answering. Researchers are exploring approaches that improve the accuracy and efficiency of large language models, both in generating human-like text and in answering complex questions. One notable direction is multimodal question answering systems that handle both textual and visual inputs. Another is the creation of benchmarks and evaluation metrics for assessing these models in real-world scenarios. There is also growing interest in using knowledge graphs to strengthen the reasoning capabilities of large language models and to mitigate hallucinations. Overall, the field is moving toward more sophisticated and interpretable models that leverage external knowledge sources to produce accurate, well-grounded responses. Noteworthy papers include EviNote-RAG, which introduces a structured retrieve-note-answer pipeline to improve the robustness of RAG models, and MTQA, which proposes a matrix-of-thought structure to strengthen the reasoning of large language models on complex question answering tasks.
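The retrieve-note-answer pattern mentioned above can be illustrated with a minimal sketch. This is not EviNote-RAG's actual method: the corpus, the word-overlap retriever, and the sentence-level note heuristic are all illustrative stand-ins for the learned components a real system would use.

```python
# Toy retrieve -> note -> answer pipeline. In a real RAG system, retrieve()
# would query a vector index and take_notes()/answer() would call an LLM;
# here each stage is a simple keyword-overlap heuristic for illustration.

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query; keep the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:k]

def take_notes(query, docs):
    """Keep only sentences that share at least one word with the query."""
    q_words = set(query.lower().split())
    notes = []
    for doc in docs:
        for sent in doc.split(". "):
            if q_words & set(sent.lower().split()):
                notes.append(sent.strip().rstrip("."))
    return notes

def answer(query, notes):
    """Stand-in for LLM generation: return the most query-relevant note."""
    q_words = set(query.lower().split())
    return max(notes, key=lambda n: len(q_words & set(n.lower().split())))

corpus = [
    "Paris is the capital of France. It lies on the Seine.",
    "Berlin is the capital of Germany. It has many museums.",
]
query = "What is the capital of France"
notes = take_notes(query, retrieve(query, corpus))
print(answer(query, notes))  # -> Paris is the capital of France
```

The intermediate notes step is the key idea: the generator sees only answer-supportive evidence rather than full retrieved documents, which narrows the surface for distraction and hallucination.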

Sources

The Temporal Game: A New Perspective on Temporal Relation Extraction

GOSU: Retrieval-Augmented Generation with Global-Level Optimized Semantic Unit-Centric Framework

Decomposing and Revising What Language Models Generate

EviNote-RAG: Enhancing RAG Models via Answer-Supportive Evidence Notes

Multimodal Iterative RAG for Knowledge Visual Question Answering

Robust Knowledge Editing via Explicit Reasoning Chains for Distractor-Resilient Multi-Hop QA

Dissecting Atomic Facts: Visual Analytics for Improving Fact Annotations in Language Model Evaluation

Towards Open-World Retrieval-Augmented Generation on Knowledge Graph: A Multi-Agent Collaboration Framework

FActBench: A Benchmark for Fine-grained Automatic Evaluation of LLM-Generated Text in the Medical Domain

CMRAG: Co-modality-based document retrieval and visual question answering

HF-RAG: Hierarchical Fusion-based RAG with Multiple Sources and Rankers

QuesGenie: Intelligent Multimodal Question Generation

Improving Factuality in LLMs via Inference-Time Knowledge Graph Construction

Explainable Knowledge Graph Retrieval-Augmented Generation (KG-RAG) with KG-SMILE

Continuous Monitoring of Large-Scale Generative AI via Deterministic Knowledge Graph Structures

MTQA: Matrix of Thought for Enhanced Reasoning in Complex Question Answering

CANDY: Benchmarking LLMs' Limitations and Assistive Potential in Chinese Misinformation Fact-Checking

RTQA: Recursive Thinking for Complex Temporal Knowledge Graph Question Answering with Large Language Models

DecMetrics: Structured Claim Decomposition Scoring for Factually Consistent LLM Outputs

KERAG: Knowledge-Enhanced Retrieval-Augmented Generation for Advanced Question Answering

Research on Multi-hop Inference Optimization of LLM Based on MQUAKE Framework

Towards Meta-Cognitive Knowledge Editing for Multimodal LLMs

ZhiFangDanTai: Fine-tuning Graph-based Retrieval-Augmented Generation Model for Traditional Chinese Medicine Formula

UNH at CheckThat! 2025: Fine-tuning Vs Prompting in Claim Extraction

Vector embedding of multi-modal texts: a tool for discovery?
