Advances in Large Language Models for Reasoning and Retrieval

The field of large language models (LLMs) is moving toward more efficient and effective reasoning and retrieval. Recent work improves how LLMs reason over and retrieve information, including hybrid thinking, which lets a model switch between explicit reasoning and direct answering, and new frameworks for generative retrieval such as Retrieval-in-the-Chain and LLM-guided Hierarchical Retrieval. There is also growing interest in applying LLMs to zero-shot learning and demographic reasoning. Noteworthy papers in this area include Unilaw-R1, which introduces a large language model tailored for legal reasoning, and ZeroGR, which proposes a zero-shot generative retrieval framework that leverages natural language instructions to extend generative retrieval across a wide range of IR tasks. Overall, the field is trending toward more specialized LLMs that handle complex tasks with greater accuracy and efficiency.

Sources

Don't Throw Away Your Pretrained Model

Unilaw-R1: A Large Language Model for Legal Reasoning with Reinforcement Learning and Iterative Inference

Adaptive Dual Reasoner: Large Reasoning Models Can Think Efficiently by Hybrid Reasoning

ZeroGR: A Generalizable and Scalable Framework for Zero-Shot Generative Retrieval

Merlin's Whisper: Enabling Efficient Reasoning in LLMs via Black-box Adversarial Prompting

Revisiting Model Interpolation for Efficient Reasoning

ThinkPilot: Steering Reasoning Models via Automated Think-prefixes Optimization

HiCoTraj: Zero-Shot Demographic Reasoning via Hierarchical Chain-of-Thought Prompting from Trajectory

Reasoning Pattern Matters: Learning to Reason without Human Rationales

Demystifying Hybrid Thinking: Can LLMs Truly Switch Between Think and No-Think?

Retrieval-in-the-Chain: Bootstrapping Large Language Models for Generative Retrieval

LLM-guided Hierarchical Retrieval

Big Reasoning with Small Models: Instruction Retrieval at Inference Time

Finding Answers in Thought Matters: Revisiting Evaluation on Large Language Models with Reasoning