Large language models (LLMs) are increasingly being applied in legal and multi-agent settings, with a focus on improving coordination, reasoning, and decision-making. Recent work has explored using LLMs to predict human reasonableness judgments, identify legal risks in commercial contracts, and estimate worst-case frontier risks. Other studies have investigated the emergence of trust and strategic argumentation among LLMs during collaborative law-making. Noteworthy papers include The Silicon Reasonable Person, which demonstrates that LLMs can learn to identify the patterns driving human reasonableness judgments, and NomicLaw, which introduces a structured multi-agent simulation in which LLMs engage in collaborative law-making, showcasing their latent social reasoning and persuasive capabilities.
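To make "structured multi-agent simulation" concrete, below is a minimal sketch of one propose-and-vote round of collaborative law-making. Everything here is an illustrative assumption, not NomicLaw's actual implementation: the `Agent` and `run_round` names are hypothetical, and the stub policies stand in for what would be LLM calls in a real system.

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """Hypothetical participant; propose/vote would wrap LLM prompts in practice."""
    name: str
    propose: Callable[[list], str]   # rulebook -> proposed rule text
    vote: Callable[[dict], str]      # {proposer: rule} -> name of chosen proposer

def run_round(agents: list[Agent], rulebook: list[str]) -> tuple[str, dict]:
    """One round: every agent proposes a rule, then all agents vote on the proposals."""
    proposals = {a.name: a.propose(rulebook) for a in agents}
    tally = {name: 0 for name in proposals}
    for a in agents:
        choice = a.vote(proposals)
        if choice not in tally:              # guard against malformed ballots
            choice = random.choice(list(tally))
        tally[choice] += 1
    winner = max(tally, key=tally.get)       # ties break arbitrarily in this sketch
    rulebook.append(proposals[winner])       # the winning proposal becomes law
    return winner, tally

if __name__ == "__main__":
    rulebook = ["Rule 0: all agents may propose and vote."]
    # Offline stubs in place of LLM calls, so the sketch runs as-is.
    agents = [
        Agent(
            name=f"agent{i}",
            propose=lambda rb, i=i: f"Rule {len(rb)}: proposal from agent{i}",
            vote=lambda props: random.choice(list(props)),
        )
        for i in range(3)
    ]
    winner, tally = run_round(agents, rulebook)
    print(f"winner={winner}, tally={tally}, rulebook={rulebook}")
```

In a real setting the interesting behavior (trust, persuasion, strategic argumentation) would emerge from the LLM-backed `propose` and `vote` policies; the loop above only fixes the protocol they interact through.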