The field of artificial intelligence is shifting toward models that mimic human-like strategic reasoning and invention. Recent studies evaluate the strategic reasoning of large language models (LLMs) in domains such as game playing and finance, producing new benchmarks and frameworks, including CHBench and FinCDM, that assess whether LLMs can reason strategically and make informed decisions. Research has also explored AI systems that invent new games and problems, demonstrating that LLMs can generate novel game designs and judge their quality. Noteworthy contributions include LegoNE, a framework for automatically discovering expert-level Nash equilibrium algorithms, and HeroBench, a benchmark for evaluating long-horizon planning and structured reasoning in virtual worlds. These advances point toward more sophisticated AI systems that can collaborate with humans and drive innovation across fields.
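To make the notion of a Nash equilibrium concrete, the sketch below solves a 2x2 zero-sum game in closed form. This is only a textbook illustration of the equilibrium concept, not LegoNE's discovery method; the function name and the assumption of a fully mixed equilibrium (no pure-strategy saddle point) are choices made here for the example.

```python
def nash_2x2_zero_sum(a):
    """Mixed-strategy Nash equilibrium of a 2x2 zero-sum game.

    a[i][j] is the row player's payoff when row plays i and column plays j.
    Assumes the game has no pure-strategy saddle point, so the unique
    equilibrium is fully mixed and given by the standard closed form.
    """
    denom = a[0][0] - a[0][1] - a[1][0] + a[1][1]
    p = (a[1][1] - a[1][0]) / denom   # probability row plays strategy 0
    q = (a[1][1] - a[0][1]) / denom   # probability column plays strategy 0
    value = (a[0][0] * a[1][1] - a[0][1] * a[1][0]) / denom
    return p, q, value

# Matching pennies: each player mixes 50/50 and the game value is 0.
p, q, v = nash_2x2_zero_sum([[1, -1], [-1, 1]])
print(p, q, v)  # 0.5 0.5 0.0
```

Benchmarks of strategic reasoning typically test whether a model can recover equilibria like this one; automated discovery frameworks instead search for approximation algorithms that scale beyond such hand-solvable cases.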