The field of large language models (LLMs) is advancing rapidly, with a growing focus on graph-related tasks. Recent work shows that LLMs can be applied effectively to problems such as graph edit distance computation, graph drawing, and knowledge graph completion. A key challenge is scalability, which novel sampling mechanisms and instruction-tuned frameworks help address; another is the development of tools and benchmarks for evaluating LLM performance on graph tasks. Noteworthy papers include GRAIL, which computes graph edit distance using LLM-generated code; a new benchmark for evaluating the ability of LLMs to customize code while preserving visual outcomes; and NeMo-Inspector, which simplifies the analysis of synthetic datasets. Overall, the field is moving toward more efficient, scalable, and interpretable graph processing with LLMs.
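To make the graph edit distance task concrete: it is the minimum number of node/edge insertions, deletions, and substitutions needed to turn one graph into another. The following is a minimal brute-force sketch for tiny undirected graphs; the function name and the exhaustive search over node mappings are illustrative assumptions, not the method used by GRAIL or any of the papers above.

```python
from itertools import permutations

def edit_distance(edges1, edges2, n):
    """Brute-force graph edit distance for two undirected graphs
    on the same n nodes (labeled 0..n-1): try every node mapping
    and count the edge insertions/deletions it implies."""
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    best = None
    for perm in permutations(range(n)):
        # Relabel g1's edges under this candidate node mapping.
        mapped = {frozenset((perm[a], perm[b])) for a, b in e1}
        # Edges present in one graph but not the other must be
        # inserted or deleted; that is the cost of this mapping.
        cost = len(mapped ^ e2)
        best = cost if best is None else min(best, cost)
    return best

# Triangle vs. 3-node path: deleting one edge suffices.
print(edit_distance([(0, 1), (1, 2), (2, 0)], [(0, 1), (1, 2)], 3))  # 1
```

Exhaustive search is factorial in the node count, which is exactly why exact GED is intractable at scale and why heuristic approaches, including the LLM-generated solvers surveyed here, are of interest.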