The field of artificial intelligence is witnessing significant advances in strategic reasoning and game playing, driven by the capabilities of large language models (LLMs). Recent work has focused on enhancing the ability of LLMs to solve complex problems such as combinatorial optimization and imperfect-information games. Researchers are exploring novel ways to integrate LLMs with traditional game-theoretic methods, yielding improved performance across game formats including poker and chess. Notably, LLMs are enabling the development of more general and interpretable agents that learn to master complex environments through explicit reasoning and planning. The evaluation of LLMs on strategic reasoning tasks is also being reexamined, with particular attention to rethinking preference semantics in arena-style evaluation. Noteworthy papers in this area include SpinGPT, which presents a large-language-model approach to playing poker correctly and reports a tolerant accuracy of 78% in decision-making; and Cogito, Ergo Ludo, which introduces a novel agent architecture that leverages an LLM to build an explicit understanding of its environment's mechanics and strategy, demonstrating successful learning across diverse grid-world tasks.