The field of artificial intelligence is moving toward more sophisticated, human-like game playing, with a focus on complex games such as poker and other card games. Recent developments show that large language models can master multiple complex card games simultaneously, and that algorithms such as Deep Monte Carlo Counterfactual Regret Minimization (MCCFR) can achieve state-of-the-art results in extensive-form games. However, the robustness of numerical reasoning in large language models remains an open question: studies indicate that their proficiency is largely confined to recalling and executing known algorithms rather than performing generative problem-solving. Noteworthy papers include:

- "Robust Deep Monte Carlo Counterfactual Regret Minimization", which proposes a framework for selective component deployment in neural MCCFR, achieving significant improvements in exploitability on Kuhn and Leduc Poker.
- "Can Large Language Models Master Complex Card Games", which demonstrates the potential of LLMs to master complex card games through supervised fine-tuning on high-quality data.
- "A Fragile Number Sense", which probes the elemental limits of numerical reasoning in LLMs, revealing that their apparent numerical reasoning is closer to sophisticated pattern matching than to flexible, analytical thought.
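To make the MCCFR discussion concrete, here is a minimal sketch of regret matching, the per-information-set update rule at the core of Counterfactual Regret Minimization and its Monte Carlo variants. The function name and the toy regret values are illustrative, not taken from any of the papers above.

```python
def regret_matching(cumulative_regrets):
    """Map cumulative regrets at one information set to a strategy.

    Actions with positive cumulative regret are played in proportion
    to that regret; if no regret is positive, play uniformly.
    """
    positive = [max(r, 0.0) for r in cumulative_regrets]
    total = sum(positive)
    n = len(cumulative_regrets)
    if total <= 0.0:
        # No action has been regretted positively: fall back to uniform.
        return [1.0 / n] * n
    return [p / total for p in positive]

# Toy example: regrets accumulated over iterations for three actions.
regrets = [3.0, -1.0, 1.0]
print(regret_matching(regrets))  # -> [0.75, 0.0, 0.25]
```

In full (MC)CFR, this update runs at every information set on each traversal, and the time-averaged strategy converges toward a Nash equilibrium in two-player zero-sum games; the neural variants discussed above replace the tabular regret storage with learned approximators.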