The field of large language models (LLMs) is moving toward a deeper understanding of their generation capabilities and social interactions. Recent studies have explored the impact of structured output constraints on LLMs, showing that causal inference can help uncover the underlying relationship between output format and generation quality. Research has also focused on the emergence of altruism in LLM agents, highlighting the value of social simulation for understanding their behavior. Methods for generating counterfactuals and for evaluating the logical consistency of expert opinions are further notable advances. Noteworthy papers include "The Emergence of Altruism in Large-Language-Model Agents Society," which introduces a novel approach to social simulation and identifies distinct archetypes of LLMs, and "Large Language Models as Nondeterministic Causal Models," which presents a simpler method for generating counterfactuals that applies directly to any black-box LLM.
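To make the counterfactual idea concrete, below is a minimal sketch, assuming the LLM can be viewed as a function of a prompt plus an exogenous noise source (here, a sampling seed): holding the noise fixed while intervening on the prompt yields a counterfactual output. The `llm_generate` stub and its canned continuations are hypothetical placeholders for a real black-box API call; this illustrates the general fix-the-noise-and-intervene pattern, not the cited paper's exact procedure.

```python
import random

def llm_generate(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for a black-box LLM call.

    In real use this would be an API call whose sampling randomness
    is pinned by `seed`. Canned continuations keep the example
    runnable: the output is a function of the prompt and the noise.
    """
    rng = random.Random(seed)  # exogenous noise, fixed by the seed
    if "5-star" in prompt:
        options = ["glowing summary", "upbeat summary", "positive summary"]
    else:
        options = ["critical summary", "harsh summary", "negative summary"]
    return rng.choice(options)

def counterfactual_pair(prompt: str, intervened_prompt: str, seed: int):
    """Factual and counterfactual outputs under shared exogenous noise.

    Treating the LLM as output = f(prompt, noise), we hold the noise
    (seed) fixed and intervene only on the prompt, asking what the
    model *would have* generated had the input been different.
    """
    return llm_generate(prompt, seed), llm_generate(intervened_prompt, seed)

if __name__ == "__main__":
    factual, counterfactual = counterfactual_pair(
        prompt="Summarize this 5-star product review:",
        intervened_prompt="Summarize this 1-star product review:",
        seed=42,  # the same noise realization in both "worlds"
    )
    print("factual:       ", factual)
    print("counterfactual:", counterfactual)
```

Because the same seed is reused in both calls, the two outputs differ only through the intervention on the prompt, which is what distinguishes a counterfactual from simply resampling the model with a new input.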