The field of artificial intelligence is moving toward more efficient and adaptable world modeling and reinforcement learning techniques. Recent work focuses on improving the ability of AI agents to learn and generalize from sparse observations, with particular emphasis on complex, non-gridworld domains. Program synthesis with Large Language Models (LLMs) has emerged as a promising approach: representing a world model as source code supports strong generalization from little data. There is also growing interest in frameworks that enhance generalization across multi-scenario games and improve the performance of reinforcement learning agents.

Noteworthy papers include:

- PoE-World, which introduces a novel program synthesis method for effectively modeling complex domains.
- Multiple Weaks Win Single Strong, which proposes a novel model-ensemble approach that equips reinforcement learning agents with task-specific semantic understanding driven by LLMs.
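To make the world-models-as-source-code idea concrete, the sketch below shows a minimal product-of-experts combination of small programmatic predictors, loosely in the spirit of PoE-World. All names and structure here are illustrative assumptions, not the paper's actual API: each "expert" stands in for an LLM-synthesized program that maps a state to a distribution over next-state outcomes, and the experts' predictions are multiplied together and renormalized.

```python
# Hypothetical sketch of a product-of-experts world model.
# Each expert is a small program (here a plain Python function, standing in
# for LLM-synthesized source code) mapping a state to a probability
# distribution over symbolic next-state outcomes.

def gravity_expert(state):
    # Illustrative rule: objects tend to fall.
    return {"down": 0.8, "stay": 0.2}

def friction_expert(state):
    # Illustrative rule: motion tends to stop.
    return {"down": 0.4, "stay": 0.6}

def product_of_experts(experts, state):
    """Combine expert distributions by multiplying probabilities, then renormalize."""
    outcomes = set()
    for expert in experts:
        outcomes |= set(expert(state))
    combined = {}
    for outcome in outcomes:
        p = 1.0
        for expert in experts:
            # Small floor so an outcome unseen by one expert is not zeroed out.
            p *= expert(state).get(outcome, 1e-9)
        combined[outcome] = p
    total = sum(combined.values())
    return {o: p / total for o, p in combined.items()}

dist = product_of_experts([gravity_expert, friction_expert], state={})
```

Because probabilities are multiplied, any single expert can veto an outcome it considers near-impossible, which is one intuition for why such ensembles of narrow, hand-interpretable programs can model a domain jointly.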