Advances in Large Language Models for Structured Knowledge Reasoning

Research in natural language processing is increasingly focused on improving the reasoning capabilities of Large Language Models (LLMs) on structured knowledge reasoning tasks. Researchers are exploring a range of methods to enhance the logical consistency and reliability of LLMs, including chain-of-thought prompting, graph reasoning, and knowledge distillation. These approaches aim to address the limitations of LLMs in handling complex relational information and structured data such as knowledge graphs and tables. Notable papers in this area include:

  • The proposal of GRIP, a framework that enables LLMs to internalize complex relational information from graphs through fine-tuning tasks.
  • The introduction of the Logits-to-Logic framework, which corrects logical defects in LLM outputs to improve logical consistency in structured knowledge reasoning.
  • The development of self-correction distillation methods that improve the structured data question answering ability of small-scale LLMs (a data-construction sketch follows this list).
  • The proposal of Knowledge-Augmented Long-CoT Generation for Complex Biomolecular Reasoning, which integrates LLMs with knowledge graph-based multi-hop reasoning chains to improve factual grounding and reasoning reliability in biomolecular tasks (also sketched below).
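The self-correction distillation idea can be illustrated with a minimal data-construction sketch. This is an assumed workflow, not the authors' implementation: student_answer and teacher_correction are hypothetical placeholders standing in for calls to a small model and a stronger teacher model, and the table, question, and answers are invented for illustration.

```python
import json

def student_answer(question, table):
    """Hypothetical placeholder for the small model's first-pass answer.
    In practice this would call the small LLM being distilled."""
    return "unknown"

def teacher_correction(question, table, draft):
    """Hypothetical placeholder for a stronger teacher model that critiques
    the draft and supplies a corrected answer."""
    return {
        "critique": "The draft ignores the row that answers the question.",
        "corrected_answer": "Oslo",
    }

def build_distillation_example(question, table):
    """Assemble one self-correction training example: the student's draft
    plus the teacher's critique and corrected answer."""
    draft = student_answer(question, table)
    feedback = teacher_correction(question, table, draft)
    return {
        "question": question,
        "table": table,
        "draft_answer": draft,
        "critique": feedback["critique"],
        "corrected_answer": feedback["corrected_answer"],
    }

table = [
    {"city": "Oslo", "population": 709037},
    {"city": "Bergen", "population": 291940},
]
example = build_distillation_example("Which city has the larger population?", table)
print(json.dumps(example, indent=2))
# Examples like this would be used to fine-tune the small model both to
# answer table questions and to revise its own faulty drafts.
```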

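Similarly, the knowledge-augmented long-CoT idea, retrieving a multi-hop chain from a knowledge graph and placing it in the prompt before the model reasons, can be sketched as follows. The triples, entity names, and prompt wording are illustrative assumptions rather than details from the paper; the resulting prompt would then be passed to whatever LLM is being used.

```python
from collections import deque

# Toy knowledge graph as (head, relation, tail) triples; the biomolecular
# facts below are illustrative placeholders, not taken from the paper.
TRIPLES = [
    ("TP53", "encodes", "p53"),
    ("p53", "regulates", "apoptosis"),
    ("apoptosis", "suppresses", "tumor growth"),
]

def build_adjacency(triples):
    """Index triples by head entity for multi-hop traversal."""
    adj = {}
    for head, rel, tail in triples:
        adj.setdefault(head, []).append((rel, tail))
    return adj

def multi_hop_chain(adj, start, goal, max_hops=4):
    """Breadth-first search for a reasoning chain of triples from start to goal."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        entity, path = queue.popleft()
        if entity == goal:
            return path
        if len(path) >= max_hops:
            continue
        for rel, tail in adj.get(entity, []):
            if tail not in visited:
                visited.add(tail)
                queue.append((tail, path + [(entity, rel, tail)]))
    return None

def chain_to_prompt(question, chain):
    """Serialize the retrieved chain into a long chain-of-thought style prompt."""
    facts = "\n".join(f"- {h} {r} {t}" for h, r, t in chain)
    return (
        f"Question: {question}\n"
        f"Relevant knowledge-graph facts:\n{facts}\n"
        "Reason step by step over these facts before giving a final answer."
    )

adj = build_adjacency(TRIPLES)
chain = multi_hop_chain(adj, "TP53", "tumor growth")
prompt = chain_to_prompt("How does TP53 relate to tumor growth?", chain)
print(prompt)  # This grounded prompt would then be sent to the LLM.
```
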
Sources

Effectiveness of Chain-of-Thought in Distilling Reasoning Capability from Large Language Models

GRIP: In-Parameter Graph Reasoning through Fine-Tuning Large Language Models

Last Layer Logits to Logic: Empowering LLMs with Logic-Consistent Structured Knowledge Reasoning

Self-Correction Distillation for Structured Data Question Answering

Knowledge-Augmented Long-CoT Generation for Complex Biomolecular Reasoning
