The field of large language models (LLMs) is evolving rapidly, with growing attention to optimization modeling and graph analysis. One notable direction is graph-theoretic evaluation, which assesses how reliably LLMs formulate linear and mixed-integer linear programs. Another is the integration of LLMs with specialized tools and techniques, such as equality saturation and Bayesian network structure discovery, to strengthen their reasoning and problem-solving abilities. Researchers are also exploring zero-shot graph learning, in which LLMs reason about graph structure without requiring large amounts of task-specific training data.

Noteworthy papers include ORGEval, which proposes a graph-theoretic framework for evaluating LLM-generated optimization models (the general idea is sketched below); GraphChain, which enables LLMs to analyze complex graphs through dynamic sequences of specialized tools; Bayesian Network Structure Discovery Using Large Language Models, which unifies LLM-driven structure discovery in a single framework; and Empowering LLMs with Structural Role Inference for Zero-Shot Graph Learning, which introduces a training-free, dual-perspective framework for structure-aware graph reasoning.
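To make the graph-theoretic evaluation idea concrete, here is a minimal sketch (our own illustration under simplifying assumptions, not ORGEval's actual algorithm): an LP of the form max c'x s.t. Ax <= b is encoded as a labeled bipartite variable-constraint graph, so a candidate formulation can be checked against a reference up to reindexing of variables and constraints.

```python
# Minimal sketch of graph-based LP comparison (an illustration, not
# ORGEval's method): encode each LP as a bipartite variable-constraint
# graph with labeled nodes and edges, then test label-preserving
# isomorphism so that formulations differing only in the order of
# variables/constraints are judged equivalent.
import networkx as nx
from networkx.algorithms import isomorphism as iso

def lp_to_graph(c, A, b):
    """Variable nodes carry objective coefficients, constraint nodes
    carry right-hand sides, and edges carry the entries of A."""
    G = nx.Graph()
    for j, cj in enumerate(c):
        G.add_node(("var", j), kind="var", coef=cj)
    for i, bi in enumerate(b):
        G.add_node(("con", i), kind="con", rhs=bi)
        for j, aij in enumerate(A[i]):
            if aij != 0:
                G.add_edge(("con", i), ("var", j), weight=aij)
    return G

def same_formulation(lp1, lp2):
    G1, G2 = lp_to_graph(*lp1), lp_to_graph(*lp2)
    nm = iso.categorical_node_match(["kind", "coef", "rhs"], [None, None, None])
    em = iso.categorical_edge_match("weight", None)
    return nx.is_isomorphic(G1, G2, node_match=nm, edge_match=em)

# Two orderings of: max x + 2y  s.t.  x + y <= 4,  y <= 3
ref = ([1, 2], [[1, 1], [0, 1]], [4, 3])
cand = ([2, 1], [[1, 0], [1, 1]], [3, 4])
print(same_formulation(ref, cand))  # True: same LP up to reindexing
```

Graph matching of this kind avoids brittle string comparison of model code, though a full evaluator would also need to handle constraint sense, variable bounds, and integrality.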
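The tool-chaining pattern behind GraphChain can likewise be illustrated with a small sketch. The tool registry, tool names, and fixed plan below are hypothetical stand-ins; in the actual setting, the LLM would select the next tool dynamically from the accumulated context.

```python
# Sketch of tool-chained graph analysis in the spirit of GraphChain
# (hypothetical tool names and planner, not the paper's API): each
# specialized tool returns a compact summary that is fed back as
# context for choosing and running the next tool.
import networkx as nx

TOOLS = {
    "degree_stats": lambda G: {"max_degree": max(d for _, d in G.degree())},
    "components": lambda G: {"n_components": nx.number_connected_components(G)},
    "pagerank_top": lambda G: {
        "top_node": max(nx.pagerank(G).items(), key=lambda kv: kv[1])[0]
    },
}

def run_chain(G, plan):
    """Execute a planned tool sequence; a real system would let the
    LLM pick each step instead of following a fixed plan."""
    context = {}
    for tool in plan:
        context.update(TOOLS[tool](G))
    return context

G = nx.karate_club_graph()
print(run_chain(G, ["degree_stats", "components", "pagerank_top"]))
```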
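Finally, the training-free, structure-aware flavor of zero-shot graph learning can be conveyed by verbalizing structural statistics for a frozen LLM. This is our own illustration of the general idea, not the dual-perspective framework from the paper.

```python
# Sketch of training-free, structure-aware prompting (an illustration,
# not the paper's framework): structural statistics are verbalized so
# a frozen LLM can reason about a node's role with no graph-specific
# training.
import networkx as nx

def role_prompt(G, node):
    deg = G.degree(node)
    clust = nx.clustering(G, node)
    neigh_deg = sorted(G.degree(n) for n in G.neighbors(node))
    return (
        f"Node {node} has degree {deg}, clustering coefficient {clust:.2f}, "
        f"and neighbor degrees {neigh_deg}. "
        "Is this node more likely a hub, a bridge, or a peripheral member?"
    )

G = nx.karate_club_graph()
print(role_prompt(G, 0))  # prompt text for a frozen LLM, no fine-tuning
```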