Artificial intelligence is advancing rapidly in game development and evaluation, with much of the work aimed at improving large language models and vision-language models. Recent research has used game code to strengthen the reasoning of large language models and has introduced new frameworks and benchmarks for measuring how well these models perform on game-related tasks.
Notable developments include industry-level video generation models for marketing scenarios, dynamic game platforms for evaluating the reasoning of large language models, and modular frameworks for automatically evaluating procedural content generation in serious games; a minimal sketch of such a game-based evaluation loop follows.
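To make the evaluation idea concrete, the sketch below shows the kind of loop a game-based benchmark runs: the agent receives a textual game state, returns a move, and is scored on the outcome. Everything here (the `GuessGame` toy environment, `query_model`, the solve-rate metric) is an illustrative assumption, not an API from KORGym or lmgame-Bench; a real harness would call a model endpoint where the bisection stand-in appears.

```python
from dataclasses import dataclass, field
import random


@dataclass
class GuessGame:
    """Toy game: the agent must locate a hidden number via feedback."""
    low: int = 1
    high: int = 100
    secret: int = field(default_factory=lambda: random.randint(1, 100))

    def observe(self) -> str:
        return f"Guess a number between {self.low} and {self.high}."

    def step(self, guess: int) -> str:
        if guess == self.secret:
            return "correct"
        # Narrow the stated range so the next observation reflects the feedback.
        if guess < self.secret:
            self.low = max(self.low, guess + 1)
            return "higher"
        self.high = min(self.high, guess - 1)
        return "lower"


def query_model(prompt: str) -> int:
    """Stand-in for an LLM call; here a simple bisection baseline.

    A real harness would send `prompt` to a model API and parse the reply.
    """
    words = prompt.replace(".", "").split()
    low, high = int(words[-3]), int(words[-1])
    return (low + high) // 2


def evaluate(episodes: int = 20, max_turns: int = 10) -> float:
    """Fraction of episodes solved within the turn budget."""
    solved = 0
    for _ in range(episodes):
        game = GuessGame()
        for _ in range(max_turns):
            guess = query_model(game.observe())
            if game.step(guess) == "correct":
                solved += 1
                break
    return solved / episodes


if __name__ == "__main__":
    print(f"solve rate: {evaluate():.2f}")
```

The design point this illustrates is that dynamic platforms score models on interaction outcomes rather than static question-answer accuracy, which makes the benchmark harder to memorize.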
Furthermore, advances in photogrammetry are transforming digital content creation by rapidly converting real-world objects into highly detailed 3D models that enhance the realism and interactivity of virtual worlds.
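As an illustration of the core computation behind photogrammetry, the sketch below recovers sparse 3D points from two overlapping photographs using standard OpenCV building blocks: feature matching, essential-matrix estimation, and triangulation. The image paths and intrinsics matrix are placeholders; production pipelines calibrate the camera, use many views, and add bundle adjustment and dense reconstruction on top of this two-view core.

```python
import cv2
import numpy as np


def two_view_points(path_a: str, path_b: str, K: np.ndarray) -> np.ndarray:
    """Triangulate sparse 3D points from two overlapping images.

    K is the 3x3 camera intrinsics matrix, assumed known here.
    """
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in both views.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Match descriptors, keeping unambiguous matches (Lowe's ratio test).
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # Estimate the relative camera pose from the essential matrix.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)

    # Triangulate inlier correspondences into 3D (up to scale).
    P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_b = K @ np.hstack([R, t])
    inliers = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P_a, P_b, pts_a[inliers].T, pts_b[inliers].T)
    return (pts4d[:3] / pts4d[3]).T  # Nx3 points in the first camera's frame


if __name__ == "__main__":
    # Placeholder intrinsics and image paths.
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
    cloud = two_view_points("view_a.jpg", "view_b.jpg", K)
    print(f"recovered {len(cloud)} sparse 3D points")
```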
Several papers stand out. CRPE introduces a framework for strengthening the code reasoning of large language models, and Aquarius presents a family of industry-level video generation models for marketing scenarios. Code2Logic takes a game-code-driven approach to improving the reasoning of vision-language models, while KORGym and lmgame-Bench provide valuable resources for evaluating large language models in interactive environments.
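To hint at what a game-code-driven approach might look like in practice, the sketch below derives question-answer pairs by executing a toy game's transition function, so every answer is grounded in code rather than hand annotation. This is a loose illustration of the general idea under assumed details (the grid game, `step`, `generate_qa` are all invented here), not Code2Logic's actual pipeline.

```python
import random

# Toy grid game: a move either shifts the player or is blocked.
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}


def step(pos: tuple, move: str, walls: set, size: int = 4) -> tuple:
    """Ground-truth transition function of the toy game."""
    dx, dy = MOVES[move]
    nxt = (pos[0] + dx, pos[1] + dy)
    if nxt in walls or not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
        return pos  # blocked moves leave the player in place
    return nxt


def generate_qa(n: int, seed: int = 0) -> list[dict]:
    """Derive QA pairs by executing the game code on random states.

    Because answers come from `step` itself, they are correct by
    construction; no human labeling is required.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        walls = {(rng.randrange(4), rng.randrange(4)) for _ in range(3)}
        pos = (rng.randrange(4), rng.randrange(4))
        while pos in walls:
            pos = (rng.randrange(4), rng.randrange(4))
        move = rng.choice(list(MOVES))
        samples.append({
            "question": f"The player is at {pos} on a 4x4 grid with walls "
                        f"at {sorted(walls)}. Where are they after moving {move}?",
            "answer": str(step(pos, move, walls)),
        })
    return samples


if __name__ == "__main__":
    for qa in generate_qa(3):
        print(qa["question"], "->", qa["answer"])
```

The appeal of this style of data generation is scale: because the game engine supplies verified answers for free, the resulting QA corpus can grow with compute rather than with annotation budget.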