The field of combinatorial optimization is undergoing a significant shift toward deep learning techniques for tackling complex problems. Researchers are exploring reinforcement learning, graph neural networks, and related methods to improve both solution quality and efficiency. A key trend is the development of architectures and frameworks that capture the underlying structure of optimization problems, for example through attention mechanisms and contrastive learning (a minimal sketch of an attention-based construction policy is given after the list below). These innovations are producing state-of-the-art results in domains such as vehicle routing and mixed-integer linear programming (MILP). Noteworthy papers include:

- An End-to-End Deep Reinforcement Learning Approach for Solving the Traveling Salesman Problem with Drones, which proposes a hierarchical Actor-Critic framework for the TSP with drones (TSP-D).
- GAMA: A Neural Neighborhood Search Method with Graph-aware Multi-modal Attention for Vehicle Routing Problem, which introduces a graph-aware multi-modal attention model for the vehicle routing problem (VRP).
- CoCo-MILP: Inter-Variable Contrastive and Intra-Constraint Competitive MILP Solution Prediction, which explicitly models inter-variable contrast and intra-constraint competition to improve MILP solution prediction.
- Planning in Branch-and-Bound: Model-Based Reinforcement Learning for Exact Combinatorial Optimization, which leverages a learned internal model of branch-and-bound dynamics to discover improved branching strategies.
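To make the attention-based trend concrete, below is a minimal sketch (assuming PyTorch) of an attention-based construction policy for a routing problem, in the spirit of pointer-network / attention-model approaches. It is illustrative only and does not reproduce the architecture of any paper listed above; the module name AttentionRoutingPolicy, the embedding dimension, and the mean-pooled graph context are all assumptions made for the example.

```python
# Minimal sketch of an attention-based construction policy for routing problems.
# All names and dimensions are illustrative assumptions, not the architecture
# of any paper cited in the summary above.
import torch
import torch.nn as nn


class AttentionRoutingPolicy(nn.Module):
    def __init__(self, node_dim: int = 2, embed_dim: int = 128):
        super().__init__()
        self.embed = nn.Linear(node_dim, embed_dim)            # per-node embedding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True),
            num_layers=3,
        )
        self.query_proj = nn.Linear(embed_dim, embed_dim)      # decoding context -> query
        self.key_proj = nn.Linear(embed_dim, embed_dim)        # node embedding -> key

    def forward(self, coords: torch.Tensor):
        # coords: (batch, n_nodes, node_dim), e.g. 2-D customer locations.
        h = self.encoder(self.embed(coords))                   # (B, N, D) node embeddings
        batch, n, _ = h.shape
        device = coords.device
        visited = torch.zeros(batch, n, dtype=torch.bool, device=device)
        batch_idx = torch.arange(batch, device=device)
        context = h.mean(dim=1)                                # simple graph-level context
        tour, log_probs = [], []
        for _ in range(n):
            q = self.query_proj(context).unsqueeze(1)          # (B, 1, D)
            k = self.key_proj(h)                               # (B, N, D)
            scores = (q @ k.transpose(1, 2)).squeeze(1) / h.size(-1) ** 0.5
            scores = scores.masked_fill(visited, float("-inf"))  # mask visited nodes
            dist = torch.distributions.Categorical(logits=scores)
            choice = dist.sample()                             # next node to visit
            log_probs.append(dist.log_prob(choice))
            visited[batch_idx, choice] = True
            context = h[batch_idx, choice]                     # condition on last chosen node
            tour.append(choice)
        # The summed log-probability can feed a REINFORCE / actor-critic loss
        # with the (negative) tour length as the reward signal.
        return torch.stack(tour, dim=1), torch.stack(log_probs, dim=1).sum(dim=1)


# Usage sketch: sample tours for a batch of random 20-city instances.
policy = AttentionRoutingPolicy()
tours, logp = policy(torch.rand(16, 20, 2))
```

In this general recipe, the encoder's attention layers capture pairwise structure among nodes, and the decoder's masked attention turns that representation into a feasible tour; training then plugs the summed log-probabilities into a policy-gradient or actor-critic objective, which is the common pattern behind the end-to-end reinforcement learning approaches summarized above.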