The field of natural language processing is moving toward integrating large language models (LLMs) with knowledge graphs to strengthen the models' factual grounding and improve their performance across a range of tasks. Recent research has focused on materializing LLM knowledge into explicit knowledge bases, synthesizing conversational data to improve LLMs' conversational capabilities, and leveraging knowledge graphs to enhance LLM-based recommendation systems. Another active area is the development of techniques for information extraction, relation extraction, and link prediction, which can surface new knowledge and relationships from large text corpora. Notably, Bayesian optimization and multi-label contrastive learning have shown promising results in improving both the performance and the efficiency of these systems.

Some noteworthy papers include:

- GPTKB v1.5, which introduces a massive knowledge base for exploring factual LLM knowledge.
- DocTalk, which presents a novel approach to synthesizing conversational data from existing text corpora.
- KERAG_R, which proposes a knowledge-enhanced retrieval-augmented generation model for recommendation (a toy retrieval-and-prompt sketch follows the list).
- Topic Modeling and Link-Prediction for Material Property Discovery, which presents an AI-driven hierarchical link prediction framework for discovering new materials (see the graph-based sketch below).
- SCoRE, which introduces a streamlined corpus-based relation extraction system built on multi-label contrastive learning and Bayesian kNN (a minimal classifier sketch closes the section).
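To make the knowledge-enhanced RAG idea concrete, here is a minimal sketch of the retrieval-and-prompting step, loosely inspired by KERAG_R but not reproducing its method: the in-memory triple store, the retrieval rule, and the prompt format are all assumptions for illustration.

```python
# Toy knowledge graph as (head, relation, tail) triples.
KG = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "genre", "sci-fi"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
    ("Interstellar", "genre", "sci-fi"),
    ("The Prestige", "directed_by", "Christopher Nolan"),
]

def retrieve_triples(liked_items, kg, limit=5):
    """Return triples whose head is an item the user liked."""
    hits = [t for t in kg if t[0] in liked_items]
    return hits[:limit]

def build_prompt(user_likes, kg):
    """Serialize retrieved triples into a grounding context for an LLM."""
    facts = "\n".join(f"- {h} {r} {t}" for h, r, t in retrieve_triples(user_likes, kg))
    return (
        f"Known facts about the user's liked items:\n{facts}\n\n"
        f"The user liked: {', '.join(user_likes)}.\n"
        "Recommend one new item and justify it using the facts above."
    )

# The resulting prompt would then be sent to an LLM of choice.
print(build_prompt(["Inception", "Interstellar"], KG))
```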
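Link prediction in this setting amounts to scoring unobserved edges in a graph. The following is a generic toy sketch using the Jaccard common-neighbor heuristic from networkx, not the hierarchical framework from the paper; the graph, node names, and candidate pairs are invented for the example.

```python
import networkx as nx

# Toy material-property graph.
G = nx.Graph()
G.add_edges_from([
    ("graphene", "high_conductivity"),
    ("graphene", "high_strength"),
    ("cnt", "high_conductivity"),
    ("cnt", "high_strength"),
    ("cnt", "flexibility"),
    ("silicene", "high_conductivity"),
])

# Score material-material pairs by their shared properties; a high score
# suggests transferring a property one material has and the other lacks
# (e.g., flexibility from cnt to graphene).
candidates = [("graphene", "cnt"), ("graphene", "silicene")]
for u, v, score in nx.jaccard_coefficient(G, candidates):
    print(f"{u} -- {v}: Jaccard score {score:.3f}")
```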
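Finally, here is a minimal sketch of multi-label Bayesian kNN over precomputed sentence embeddings, in the spirit of (but not identical to) SCoRE's classifier. The Beta(1, 1) prior, k=5, and the random stand-in data are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, dim, n_labels, k = 100, 32, 4, 5
alpha, beta = 1.0, 1.0  # pseudo-counts of an assumed Beta prior per label

# In practice these embeddings would come from a contrastively trained
# encoder; here they are random stand-ins.
X_train = rng.normal(size=(n_train, dim))
Y_train = rng.integers(0, 2, size=(n_train, n_labels))  # multi-hot relation labels

def predict(query, threshold=0.5):
    # Cosine similarity between the query and every training embedding.
    sims = X_train @ query / (
        np.linalg.norm(X_train, axis=1) * np.linalg.norm(query) + 1e-9
    )
    nn = np.argsort(-sims)[:k]            # indices of the k nearest neighbors
    counts = Y_train[nn].sum(axis=0)      # per-label votes among the neighbors
    # Posterior mean of a Beta-Bernoulli model for each label.
    posterior = (counts + alpha) / (k + alpha + beta)
    return (posterior >= threshold).astype(int), posterior

labels, probs = predict(rng.normal(size=dim))
print("predicted labels:", labels)
print("label posteriors:", np.round(probs, 2))
```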