The field of natural language processing is moving toward more efficient and accurate methods for constructing knowledge graphs and enhancing large language models. Recent work has focused on leveraging external knowledge to improve the performance of large language models, particularly on tasks that require context-based inference. Noteworthy papers include LKD-KGC, which proposes a novel framework for unsupervised domain-specific knowledge graph construction, and TableEval, a new benchmark for evaluating large language models on complex, multilingual, multi-structured table question answering. Multimodal tabular reasoning is another promising direction: the Turbo framework leverages privileged structured information to enhance multimodal large language models. Finally, human-guided frameworks that reduce reliance on large language models in multi-table question answering, such as the proposed graph-based framework, have proven effective in complex, real-world scenarios.
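To make the graph-based idea for multi-table question answering concrete, here is a minimal Python sketch, not the method from the paper above: tables become graph nodes, shared key columns become edges, and the join path between two tables is then resolved deterministically instead of being delegated to a large language model. The schema, table names, and key columns are all hypothetical, and the sketch assumes the networkx library.

```python
import networkx as nx  # third-party: pip install networkx

# Hypothetical schema graph: nodes are tables, edges are shared key columns.
# Illustrative only; not the framework proposed in the paper discussed above.
schema = nx.Graph()
schema.add_edge("orders", "customers", key="customer_id")
schema.add_edge("orders", "products", key="product_id")
schema.add_edge("products", "suppliers", key="supplier_id")

def join_path(src: str, dst: str) -> list[tuple[str, str, str]]:
    """Return the chain of (table_a, table_b, join_key) hops linking two tables."""
    path = nx.shortest_path(schema, src, dst)
    return [(a, b, schema.edges[a, b]["key"]) for a, b in zip(path, path[1:])]

# Resolving the join path from the schema graph keeps this step out of the LLM:
# [('customers', 'orders', 'customer_id'),
#  ('orders', 'products', 'product_id'),
#  ('products', 'suppliers', 'supplier_id')]
print(join_path("customers", "suppliers"))
```

The design point of such an approach is that structural decisions with a single correct answer, such as which tables to join and on which keys, can be computed from the schema graph, reserving the language model for the genuinely ambiguous parts of the question.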