Advances in Knowledge Graph Construction and Large Language Models

The field of natural language processing is moving toward more efficient and accurate methods for constructing knowledge graphs and enhancing large language models. Recent work has focused on leveraging external knowledge to improve LLM performance, particularly on tasks that require context-based inference. Noteworthy papers include LKD-KGC, which proposes a framework for unsupervised domain-specific knowledge graph construction, and TableEval, a benchmark for evaluating LLMs on complex, multilingual, and multi-structured table question answering. Multimodal tabular reasoning with privileged structured information is another promising direction, exemplified by the Turbo framework, which leverages structured information to enhance multimodal LLMs. Finally, human-guided frameworks for reducing LLM reliance in multi-table question answering, such as the schema-graph-based framework listed below, have proven effective in complex, real-world scenarios.
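To make the "LLM inference enhanced by external knowledge" pattern concrete, here is a minimal, self-contained sketch of retrieval-augmented prompting with knowledge-graph triples. It does not reproduce any of the surveyed papers' implementations; the triples, function names, and prompt template are illustrative assumptions only.

```python
def triples_to_context(triples):
    """Render knowledge-graph triples as plain-text facts for a prompt."""
    return "\n".join(f"{s} {p} {o}." for s, p, o in triples)

def build_prompt(question, triples):
    """Prepend retrieved facts so the model can ground its answer in them."""
    context = triples_to_context(triples)
    return f"Known facts:\n{context}\n\nQuestion: {question}\nAnswer:"

# Hypothetical triples retrieved from a domain knowledge graph.
triples = [
    ("Aspirin", "treats", "headache"),
    ("Aspirin", "interactsWith", "warfarin"),
]
prompt = build_prompt("Can aspirin be taken with warfarin?", triples)
print(prompt)
```

In a real pipeline, the triple list would come from a retriever over a constructed knowledge graph (e.g. one built by a method like LKD-KGC), and the assembled prompt would be sent to an LLM.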

Sources

LKD-KGC: Domain-Specific KG Construction via LLM-driven Knowledge Dependency Parsing

LLM Inference Enhanced by External Knowledge: A Survey

TableEval: A Real-World Benchmark for Complex, Multilingual, and Multi-Structured Table Question Answering

Multimodal Tabular Reasoning with Privileged Structured Information

Plugging Schema Graph into Multi-Table QA: A Human-Guided Framework for Reducing LLM Reliance
