The field of combinatorial optimization is shifting toward collaborative problem solving, with growing emphasis on frameworks that let multiple agents work together to improve solving performance. This direction is driven by the ability of large language models (LLMs) to learn effective policies and heuristics for complex optimization problems, and multi-agent systems, reinforcement learning, and game-theoretic approaches are increasingly used to build more sophisticated and adaptive solving strategies. Notable papers in this area include Collab-Solver, which proposes a multi-agent policy learning framework for mixed-integer linear programming; EoH-S, which introduces a new formulation for automated heuristic set design with LLMs; CTTS, which explores collective test-time scaling to enhance LLM effectiveness; MOTIF, which proposes a turn-based interactive framework for multi-strategy optimization; LLM Collaboration With Multi-Agent Reinforcement Learning, which models LLM collaboration as a cooperative multi-agent reinforcement learning problem (a framing illustrated by the sketch below); and RCR-Router, which introduces a modular, role-aware context routing framework for efficient collaboration among multi-agent LLMs.
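
To make the cooperative multi-agent RL framing concrete, the following is a minimal toy sketch, not an implementation from any of the cited papers: two independent Q-learning agents, standing in for collaborating LLM policies, receive a single shared team reward and learn to coordinate their actions. All names and payoff values here are illustrative assumptions.

```python
# Illustrative sketch of cooperative multi-agent RL (hypothetical names/values):
# all agents share one team reward and learn decentralized action preferences.
import random

# Shared payoff table: the team is rewarded only when both agents coordinate.
JOINT_PAYOFF = {
    (0, 0): 1.0,
    (1, 1): 1.0,
    (0, 1): 0.0,
    (1, 0): 0.0,
}

class Agent:
    """Independent Q-learner; a stand-in for one collaborating LLM agent's policy."""
    def __init__(self, n_actions=2, lr=0.1, eps=0.2):
        self.q = [0.0] * n_actions
        self.lr, self.eps = lr, eps

    def act(self):
        if random.random() < self.eps:                 # explore
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)  # exploit

    def update(self, action, shared_reward):
        # Cooperative setting: every agent updates on the same team reward.
        self.q[action] += self.lr * (shared_reward - self.q[action])

agents = [Agent(), Agent()]
for step in range(2000):
    actions = tuple(a.act() for a in agents)
    reward = JOINT_PAYOFF[actions]                     # one reward for the whole team
    for agent, action in zip(agents, actions):
        agent.update(action, reward)

print("Learned action preferences:", [a.q for a in agents])
```

In a full system, the tabular agents above would be replaced by LLM policies and the coordination game by an actual solving task, but the defining feature of the cooperative formulation is the same: decentralized decisions optimized against a shared team objective.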