The field of large language models (LLMs) is advancing rapidly, with growing attention to agents that can cooperate and interact with each other in complex environments. Recent research has explored LLMs in open-source games, where agents participate by submitting computer programs whose source code is visible to their opponents, enabling interpretability, transparency, and formal verifiability; a minimal sketch of such a game appears below. Other work has investigated the altruistic tendencies of LLMs, revealing a gap between their implicit associations, self-reports, and behavioral altruism. New evaluation methods such as Concordia now make it possible to assess whether LLM-based agents can cooperate in zero-shot, mixed-motive environments.

Noteworthy papers include:

- Evaluating LLMs in Open-Source Games, which assesses how well leading LLMs predict and classify program strategies.
- Do Large Language Models Walk Their Talk, which investigates the altruistic tendencies of LLMs and reveals a virtue-signaling gap.
- AsymPuzl: An Asymmetric Puzzle for multi-agent cooperation, which introduces a minimal but expressive two-agent puzzle environment to isolate communication under information asymmetry; a toy version is sketched below.
- Strategic Self-Improvement for Competitive Agents in AI Labour Markets, which proposes a framework for capturing the real-world economic forces that shape agentic labor markets.
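To make the open-source setting concrete, the sketch below shows a one-shot Prisoner's Dilemma in which each strategy is a program that is handed its opponent's source code before moving. This is a minimal illustration of the genre, not code from any of the papers above: the bot names and the `play_match` helper are invented, and `clique_bot` uses the classic program-equilibrium trick of cooperating only with an exact copy of itself.

```python
# Illustrative sketch of an "open-source" one-shot Prisoner's Dilemma:
# each strategy receives the opponent's source code and returns a move.
# All names here (clique_bot, defect_bot, play_match) are hypothetical.
import inspect

# (my move, their move) -> my payoff
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def defect_bot(opponent_source: str) -> str:
    # Ignores the opponent's code and always defects.
    return "D"

def clique_bot(opponent_source: str) -> str:
    # Cooperates only with a program whose source matches its own:
    # the textbook program-equilibrium construction.
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

def play_match(bot_a, bot_b) -> tuple[int, int]:
    # Each bot moves after reading the other's source code.
    move_a = bot_a(inspect.getsource(bot_b))
    move_b = bot_b(inspect.getsource(bot_a))
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

if __name__ == "__main__":
    print(play_match(clique_bot, clique_bot))  # (3, 3): mutual cooperation
    print(play_match(clique_bot, defect_bot))  # (1, 1): mutual defection
```

Because the strategies are ordinary programs, their behavior can be inspected, tested, or formally verified, which is what makes this setting attractive for evaluating LLM-submitted agents.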
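To illustrate what "communication under information asymmetry" looks like in a two-agent puzzle, here is a hypothetical toy environment in the same spirit. It is not AsymPuzl's actual interface, and every name in it (`AsymmetricPuzzle`, `honest_oracle`, `literal_solver`) is invented for illustration: one agent observes a secret key, the other must act on it, so the pair succeeds only if the message carries the missing information.

```python
# Toy sketch of a two-agent puzzle with information asymmetry
# (hypothetical, not AsymPuzl's real API): an "oracle" sees the secret,
# a "solver" acts, and success requires communication between them.
import random

class AsymmetricPuzzle:
    def __init__(self, num_keys: int = 4, seed: int | None = None):
        self.num_keys = num_keys
        self.secret = random.Random(seed).randrange(num_keys)

    def oracle_observation(self) -> int:
        # Only the oracle agent is shown the secret key.
        return self.secret

    def step(self, guess: int) -> bool:
        # The solver's guess succeeds only if the oracle's message
        # conveyed the secret, since the solver never observes it.
        return guess == self.secret

def honest_oracle(secret: int) -> str:
    # Oracle policy: encode the secret in a message.
    return f"key={secret}"

def literal_solver(message: str) -> int:
    # Solver policy: decode the message and guess accordingly.
    return int(message.split("=")[1])

if __name__ == "__main__":
    env = AsymmetricPuzzle(seed=0)
    message = honest_oracle(env.oracle_observation())
    print(env.step(literal_solver(message)))  # True: communication succeeded
```

Dropping or garbling the oracle's message reduces the solver to chance performance, which is the kind of property that lets such an environment isolate communication rather than individual reasoning.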