The field of natural language processing is seeing significant progress in graph understanding and knowledge engine construction. Researchers are exploring methods to strengthen large language models (LLMs) on graph-related tasks such as knowledge graph completion and automated essay scoring. A key direction is injecting structured context and semantic information into models, drawing on graph neural networks, transformer-based architectures, and contrastive learning to capture complex relationships and nuances in data.

Noteworthy papers in this area include Efficient Graph Understanding with LLMs via Structured Context Injection, which guides LLMs through graph problems by injecting structured context into their inputs, and StructCoh, a graph-enhanced contrastive learning framework that combines structural reasoning with representation-space optimization. TransGAT and OTESGN demonstrate the effectiveness of transformer-based graph neural networks for automated essay scoring and of optimal-transport-enhanced syntactic-semantic graph networks for aspect-based sentiment analysis, respectively.
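To make the first idea concrete, below is a minimal sketch of the general structured-context-injection pattern: a graph is serialized into a structured textual block (here, a node list plus an edge list) and placed in the prompt alongside the task. The function names and the edge-list serialization are illustrative assumptions, not the paper's actual framework or API.

```python
import networkx as nx


def graph_to_structured_context(g: nx.Graph) -> str:
    """Serialize a graph into a structured text block for an LLM prompt."""
    lines = [f"Nodes ({g.number_of_nodes()}): " + ", ".join(map(str, sorted(g.nodes)))]
    lines.append(f"Edges ({g.number_of_edges()}):")
    for u, v in sorted(g.edges):
        lines.append(f"  ({u}, {v})")
    return "\n".join(lines)


def build_prompt(g: nx.Graph, task: str) -> str:
    """Combine the serialized graph context with a natural-language task."""
    return (
        "You are solving a graph problem.\n"
        f"Graph:\n{graph_to_structured_context(g)}\n"
        f"Task: {task}\n"
        "Answer concisely."
    )


if __name__ == "__main__":
    g = nx.path_graph(5)  # toy graph: 0-1-2-3-4
    print(build_prompt(g, "What is the shortest path length from node 0 to node 4?"))
```

The point of the structured serialization is that the model receives an explicit, consistently formatted description of the graph rather than an ad hoc natural-language one, which is what distinguishes this line of work from plain prompting.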
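StructCoh's exact training objective is not detailed here; as a rough illustration of the contrastive component such graph-enhanced frameworks typically build on, the following sketch computes a symmetric InfoNCE loss over paired graph and text embeddings. The batch pairing and the choice of encoders are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn.functional as F


def info_nce(graph_emb: torch.Tensor, text_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss: row i of graph_emb is the positive for
    row i of text_emb; all other in-batch pairs act as negatives."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.T / temperature               # (B, B) cosine-similarity logits
    labels = torch.arange(g.size(0), device=g.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2


# usage: embeddings would come from a graph encoder and a text encoder
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```

Pulling matched graph and text representations together while pushing apart mismatched ones is the standard mechanism by which contrastive objectives optimize the representation space, the aspect StructCoh combines with structural reasoning.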