The field of graph learning is advancing rapidly with the integration of large language models (LLMs). Recent work has shown that LLMs can be effective at learning graph representations, particularly when graph structures are complex or dynamic. Researchers are exploring various ways to combine LLMs with graph neural networks (GNNs) to leverage the strengths of both paradigms. One key direction uses LLMs to generate text-based representations of graph nodes and edges, which then serve as input features to GNNs (see the sketch at the end of this section). Another applies LLMs as a preprocessing step that extracts relevant information from graph-structured data before a GNN is trained on it. These innovations could improve the performance of graph learning models across a wide range of applications, including recommendation systems, social network analysis, and natural language processing.

Noteworthy papers include:

- Temporal Graph Talker, which introduces a novel framework for temporal graph learning using LLMs;
- Rel-LLM, which proposes a GNN-based encoder to generate structured relational prompts for LLMs;
- Graph-MLLM, which presents a comprehensive benchmark for multimodal graph learning using multimodal LLMs (MLLMs).
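To make the first direction concrete, here is a minimal sketch of the "LLM embeddings as GNN node features" pattern, assuming the sentence-transformers and PyTorch Geometric libraries are installed. The encoder choice (all-MiniLM-L6-v2), the toy graph, and the TextGCN class are illustrative assumptions for this sketch, not the method of any paper cited above.

```python
# Minimal sketch: encode node text with a pretrained language model,
# then feed the embeddings to a standard GNN as node features.
# The model name, toy graph, and class below are illustrative only.
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# 1) Text descriptions attached to each node (e.g., titles, bios).
node_texts = [
    "Survey of graph neural networks",
    "Large language models for code generation",
    "Temporal link prediction in social networks",
    "Prompting strategies for relational reasoning",
]

# 2) Encode each node's text; any sentence encoder works here,
#    MiniLM is just a small default choice.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
x = encoder.encode(node_texts, convert_to_tensor=True)  # [num_nodes, 384]

# 3) A toy undirected edge list (each edge listed in both directions).
edge_index = torch.tensor(
    [[0, 1, 1, 2, 2, 3],
     [1, 0, 2, 1, 3, 2]], dtype=torch.long
)
graph = Data(x=x, edge_index=edge_index)

# 4) A plain two-layer GCN consumes the LLM-derived features.
class TextGCN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, data: Data) -> torch.Tensor:
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = TextGCN(in_dim=x.size(1), hidden_dim=64, num_classes=2)
logits = model(graph)  # [num_nodes, num_classes] per-node class scores
print(logits.shape)
```

The design point is the division of labor: the language model handles unstructured node text, while the GNN handles message passing over the graph structure, so neither component has to model both modalities at once.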