Efficient Reasoning and Causal Discovery with Large Language Models

The field of large language models (LLMs) is moving toward improving both their efficiency and their capacity for causal reasoning. Researchers are optimizing self-consistency, a widely used test-time inference technique, to reach state-of-the-art sample efficiency, and there is growing interest in integrating LLMs with classical planning approaches such as Hierarchical Task Network (HTN) planning to strengthen their problem-solving capabilities. Surveys of test-time scaling techniques and frameworks for teaching LLMs causal reasoning are also gaining attention. Noteworthy papers include Optimal Self-Consistency for Efficient Reasoning with Large Language Models, which introduces a variant of self-consistency that dynamically allocates samples across questions during inference, and CARE: Turning LLMs Into Causal Reasoning Expert, which proposes a supervised fine-tuning framework for enhancing LLMs' causal-reasoning ability.
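
To make the dynamic-allocation idea concrete, below is a minimal Python sketch of self-consistency with an adaptive stopping rule: easy questions terminate after a few samples, while harder ones consume more of the budget. The `sample_answer` placeholder and the vote-margin stopping criterion are illustrative assumptions, not the paper's exact allocation strategy.

```python
from collections import Counter

def sample_answer(question: str) -> str:
    """Placeholder for one stochastic LLM call (temperature > 0)."""
    raise NotImplementedError

def self_consistent_answer(question: str,
                           min_samples: int = 3,
                           max_samples: int = 40,
                           margin: int = 3) -> str:
    """Draw samples until the leading answer is ahead by `margin` votes,
    so the per-question sample count adapts to question difficulty."""
    votes: Counter[str] = Counter()
    for n in range(1, max_samples + 1):
        votes[sample_answer(question)] += 1
        if n >= min_samples:
            ranked = votes.most_common(2)
            lead = ranked[0][1] - (ranked[1][1] if len(ranked) > 1 else 0)
            if lead >= margin:
                break  # consensus reached early; stop spending samples
    return votes.most_common(1)[0][0]
```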

Sources

Optimal Self-Consistency for Efficient Reasoning with Large Language Models

Online Learning of HTN Methods for integrated LLM-HTN Planning

Test-time Scaling of LLMs: A Survey from A Subproblem Structure Perspective

CARE: Turning LLMs Into Causal Reasoning Expert
