Large Language Models in Legal and Multi-Agent Scenarios

The field of large language models (LLMs) is moving toward broader application in legal and multi-agent scenarios, with a focus on improving coordination, reasoning, and decision-making. Recent research has explored the use of LLMs to predict human reasonableness judgments, identify clause-level legal risks in commercial contracts, and estimate worst-case frontier risks of open-weight models. Other studies have investigated the emergence of trust and strategic argumentation among LLMs during collaborative law-making. Noteworthy papers include The Silicon Reasonable Person, which demonstrates that LLMs can learn to identify the patterns driving human reasonableness judgments, and NomicLaw, which introduces a structured multi-agent simulation in which LLMs engage in collaborative law-making, showcasing their latent social reasoning and persuasive capabilities.

Sources

Strategic Communication and Language Bias in Multi-Agent LLM Coordination

The Silicon Reasonable Person: Can AI Predict How Ordinary People Judge Reasonableness?

ContractEval: Benchmarking LLMs for Clause-Level Legal Risk Identification in Commercial Contracts

Estimating Worst-Case Frontier Risks of Open-Weight LLMs

NomicLaw: Emergent Trust and Strategic Argumentation in LLMs During Collaborative Law-Making
