The field of multi-agent reinforcement learning is moving toward more principled and transparent methods for training and evaluating agents. Recent work focuses on designing training signals and reward functions that guide agent behavior and promote cooperation. A key direction is the integration of game-theoretic and causal approaches, which yield more nuanced and interpretable explanations of individual agent decisions and collective behavior. Notable papers in this area include MACIE, a framework for explaining collective behavior in multi-agent systems, and CRM, which introduces collaborative reward design to enhance reasoning in reinforcement learning. These advances have the potential to improve the robustness, interpretability, and accountability of multi-agent AI systems.
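The idea of a reward function that promotes cooperation can be sketched generically. The blending scheme and function names below are illustrative assumptions for exposition only, not the design proposed in CRM or any other cited work:

```python
# Generic illustration (an assumed scheme, not CRM's actual method):
# a shaped reward that blends each agent's individual reward with a
# shared team signal to encourage cooperative behavior.

def cooperative_reward(individual, team, alpha=0.5):
    """Blend an agent's own reward with the team-average reward.

    alpha=0 is fully selfish; alpha=1 is fully cooperative.
    """
    return (1 - alpha) * individual + alpha * team

def shaped_rewards(rewards, alpha=0.5):
    """Apply the blend to every agent's raw reward at one time step."""
    team = sum(rewards) / len(rewards)
    return [cooperative_reward(r, team, alpha) for r in rewards]

# Two agents: one earned 1.0, the other 0.0. With alpha=0.5 each
# reward moves halfway toward the team mean of 0.5.
print(shaped_rewards([1.0, 0.0], alpha=0.5))  # → [0.75, 0.25]
```

Varying `alpha` interpolates between independent learners and a fully shared team objective, which is one simple way such training signals trade off individual incentives against collective performance.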