The fields of natural language processing and graph learning are witnessing a significant shift toward integrating large language models (LLMs) with graphs, combining semantic understanding with structured reasoning. This integration has the potential to improve applications such as recommendation systems, biomedical analysis, and knowledge-intensive question answering. Recent research has focused on frameworks and architectures that combine the strengths of LLMs and graphs, including sequential, parallel, and multi-module designs (a minimal sketch of all three follows the paper list below). There is also growing interest in addressing over-aggregation, scalability, and interpretability in graph learning. Noteworthy papers in this area include:

- Large Language Models Meet Text-Attributed Graphs: a comprehensive survey of LLM-TAG integration frameworks and applications.
- Relieving the Over-Aggregating Effect in Graph Transformers: a plug-and-play method that mitigates over-aggregation in graph attention (see the second sketch below).
- ATOM: a few-shot, scalable approach for building and continuously updating temporal knowledge graphs from unstructured text.
- BambooKG: a neurobiologically inspired, frequency-weighted knowledge graph that enhances retrieval-augmented generation.
- LINK-KG: a modular framework for constructing coreference-resolved knowledge graphs of human smuggling networks.
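To make the three integration styles concrete, here is a minimal, self-contained Python sketch. All names here (TextAttributedGraph, graph_encoder, llm, and the three pipeline functions) are hypothetical stand-ins, not APIs from any of the surveyed papers; a real system would use a learned GNN encoder and an actual LLM client.

```python
# Illustrative sketch of sequential, parallel, and multi-module LLM-graph
# integration. All names are hypothetical, not from the surveyed papers.
from dataclasses import dataclass


@dataclass
class TextAttributedGraph:
    node_text: dict[int, str]        # node_id -> text attribute
    edges: list[tuple[int, int]]     # (src, dst) pairs


def graph_encoder(g: TextAttributedGraph, node: int) -> list[float]:
    """Stand-in for a GNN: a toy 1-d embedding from the average text length
    of the node and its neighbors. Real systems use learned encoders."""
    nbrs = [d for s, d in g.edges if s == node] + [s for s, d in g.edges if d == node]
    texts = [g.node_text[node]] + [g.node_text[n] for n in nbrs]
    return [sum(len(t) for t in texts) / len(texts)]


def llm(prompt: str) -> str:
    """Stand-in for an LLM call (an API client in practice)."""
    return f"[LLM answer given prompt of {len(prompt)} chars]"


def sequential_pipeline(g, node, question):
    # Sequential: the graph encoder's output is verbalized into the prompt.
    emb = graph_encoder(g, node)
    return llm(f"Node context (embedding summary: {emb}). Question: {question}")


def parallel_pipeline(g, node, question):
    # Parallel: LLM and graph encoder run independently; fusion left abstract.
    return (llm(question), graph_encoder(g, node))


def multi_module_pipeline(g, node, question):
    # Multi-module: an LLM plans, a graph module retrieves, the LLM answers.
    plan = llm(f"Plan retrieval for: {question}")
    evidence = graph_encoder(g, node)  # retrieval module (kept trivial here)
    return llm(f"Plan: {plan}. Evidence: {evidence}. Question: {question}")


g = TextAttributedGraph({0: "paper on GNNs", 1: "paper on LLMs"}, [(0, 1)])
print(sequential_pipeline(g, 0, "What topic links these papers?"))
```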
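And a second sketch of one simple way graph attention can be sparsified to limit over-aggregation: keep only each query's top-k attention logits and renormalize, so a node's attention concentrates on a few neighbors rather than spreading across the whole graph. This is an illustrative technique (top-k masking), not the specific mechanism proposed in Relieving the Over-Aggregating Effect in Graph Transformers.

```python
import math


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def topk_sparse_attention(scores, k):
    """Keep only the k largest attention logits and renormalize.
    Illustrative mitigation of over-aggregation via top-k masking."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = set(ranked[:k])
    masked = [s if i in keep else float("-inf") for i, s in enumerate(scores)]
    return softmax(masked)


# One query node attending over five neighbors:
logits = [2.0, 1.5, 0.1, -0.3, -1.2]
print(softmax(logits))                   # dense: mass spread over all neighbors
print(topk_sparse_attention(logits, 2))  # sparse: mass on the top-2 only
```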