Advancements in Large Language Models and Knowledge Retrieval

The field of natural language processing is increasingly leveraging large language models (LLMs) to enhance tasks such as semantic matching, web page classification, and knowledge retrieval. Researchers are exploring ways to incorporate external knowledge into LLMs, including reinforcement learning to optimize when and how models invoke search, and structural entropy-guided navigation for detecting and repairing knowledge deficiencies.

Noteworthy papers in this area include one that proposes an LLM-enhanced Q-learning framework for the Capacitated Vehicle Routing Problem with Time Windows, and another that introduces IKEA, a Reinforced Internal-External Knowledge Synergistic Reasoning Agent for efficient adaptive search. Other significant contributions include DynamicRAG, a framework that dynamically adjusts the order and number of retrieved documents based on the query, and InForage, a reinforcement learning framework that formalizes retrieval-augmented reasoning as a dynamic information-seeking process.
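The query-adaptive reranking idea behind DynamicRAG can be sketched as follows. This is a minimal illustration, not the paper's method: the `score` function here is a toy term-overlap proxy standing in for the LLM-based relevance feedback the framework actually uses, and the threshold-based cutoff is an assumed mechanism for letting the number of retained documents vary per query.

```python
# Hedged sketch of query-adaptive reranking in the spirit of DynamicRAG:
# score each retrieved document against the query, then keep a per-query
# variable number of documents rather than a fixed top-k.

def score(query: str, doc: str) -> float:
    """Toy relevance proxy via term overlap; a real system would
    query an LLM for relevance feedback instead (assumption)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def dynamic_rerank(query: str, docs: list[str], threshold: float = 0.5) -> list[str]:
    """Order documents by relevance and retain only those above a
    threshold, so the kept count adapts to the query."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    return [d for d in ranked if score(query, d) >= threshold]

docs = [
    "vehicle routing with time windows",
    "retrieval augmented generation reranking",
    "reranking retrieved documents for generation",
]
print(dynamic_rerank("reranking documents for retrieval augmented generation", docs))
```

Under this sketch, an off-topic document falls below the threshold and is dropped entirely, while the relevant documents are both kept, illustrating how the retained set size changes with the query.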

Sources

A Large Language Model-Enhanced Q-learning for Capacitated Vehicle Routing Problem with Time Windows

Using External knowledge to Enhanced PLM for Semantic Matching

Web Page Classification using LLMs for Crawling Support

Structural Entropy Guided Agent for Detecting and Repairing Knowledge Deficiencies in LLMs

DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation

Matching Tasks with Industry Groups for Augmenting Commonsense Knowledge

Reinforced Internal-External Knowledge Synergistic Reasoning for Efficient Adaptive Search Agent

A Comparative Analysis of Static Word Embeddings for Hungarian

SEM: Reinforcement Learning for Search-Efficient Large Language Models

Scent of Knowledge: Optimizing Search-Enhanced Reasoning with Information Foraging
