The field of large language models (LLMs) in multi-agent systems is moving toward a closer examination of their capabilities and limitations in complex social interactions. Researchers are investigating how LLM agents can be designed to ensure fairness, cooperation, and alignment with human values in scenarios such as peer-to-peer markets, public goods games, and moral dilemmas. One key finding is that LLMs can exhibit utilitarian behavior, prioritizing the greater good over individual interests, although the underlying mechanisms differ from those of humans. Studies have also shown that LLM agents can be prone to collusion and hallucinations, highlighting the need for careful evaluation and mitigation strategies.

Noteworthy papers include:

- FairMarket-RL: presents a framework for fairness-aware trading agents in peer-to-peer markets.
- Corrupted by Reasoning: reveals that reasoning LLMs can struggle with cooperation in public goods games (a minimal sketch of this setting follows the list).
- Many LLMs Are More Utilitarian Than One: demonstrates that LLMs, particularly when deliberating in groups, can exhibit utilitarian behavior in moral dilemmas.
- Evaluating LLM Agent Collusion in Double Auctions: examines the potential for LLM agents to collude in market interactions.
- Using multi-agent architecture to mitigate the risk of LLM hallucinations: proposes a system to reduce hallucination risks in customer service applications.
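To make the public goods setting concrete, here is a minimal, illustrative sketch of a single round of a standard public goods game with the LLM decision step stubbed out. The function names, the contribution policy, and the parameter values (endowment, multiplier, number of agents) are assumptions chosen for illustration and are not drawn from the cited papers.

```python
# Illustrative sketch (not from the cited papers): one round of a standard
# public goods game, the kind of setting used to evaluate LLM cooperation.

def decide_contribution(agent_id: int, endowment: float) -> float:
    """Placeholder policy; in the studies above this would be an LLM call."""
    return endowment * 0.5  # hypothetical: contribute half of the endowment


def play_round(n_agents: int = 4, endowment: float = 20.0, multiplier: float = 1.6):
    """Collect contributions, multiply the pool, and split it equally."""
    contributions = [decide_contribution(i, endowment) for i in range(n_agents)]
    public_pool = sum(contributions) * multiplier
    share = public_pool / n_agents
    # Each agent keeps what it did not contribute, plus an equal share of the pool.
    payoffs = [endowment - c + share for c in contributions]
    return contributions, payoffs


if __name__ == "__main__":
    contributions, payoffs = play_round()
    print("contributions:", contributions)
    print("payoffs:", payoffs)
```

In this structure, cooperation corresponds to high contributions: the group is best off when everyone contributes fully, but because the per-capita return on a contribution (multiplier / n_agents, here 0.4) is below 1, a purely self-interested agent is tempted to contribute nothing, which is what makes the game a useful probe of cooperative behavior.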