The field of multi-agent systems is witnessing significant developments in knowledge representation and reasoning. Researchers are exploring new frameworks and models to enable more effective and transparent decision-making in complex systems. One notable direction is the integration of formalized knowledge representations with symbolic reasoning, which supports more verifiable and explainable outcomes. Additionally, there is growing interest in probabilistic approaches to belief revision and stability, which capture the dynamics of belief updating in a more nuanced way. Paraconsistent frameworks are also being proposed to handle inconsistencies and contradictions in knowledge bases, offering more robust and interpretable similarity measures over inconsistent information. Noteworthy papers include: On Verifiable Legal Reasoning, which introduces a modular multi-agent framework for legal reasoning with formalized knowledge representations, and Probabilistically stable revision and comparative probability, which provides a representation theorem for probabilistically stable revision operators.
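To make the notion of probabilistic stability concrete, the sketch below checks whether a proposition is P-stable over a finite set of worlds, in the sense commonly used in stability-based accounts of belief: a proposition A is stable (at threshold 1/2) if P(A | B) > 1/2 for every evidence proposition B that is consistent with A and has positive probability. This is a minimal illustration of the general stability idea, not code from the cited paper; the world names, distribution, and threshold are illustrative assumptions.

```python
from itertools import combinations

def powerset(worlds):
    """All non-empty subsets of a finite set of worlds."""
    s = list(worlds)
    return (set(c) for r in range(1, len(s) + 1) for c in combinations(s, r))

def prob(event, p):
    """Probability of an event (a set of worlds) under distribution p."""
    return sum(p[w] for w in event)

def is_stable(a, p, r=0.5):
    """
    Check whether proposition `a` (a set of worlds) is P-stable at threshold r:
    P(a | b) > r for every evidence proposition b that overlaps `a`
    and has positive probability. Brute-force enumeration, so only
    suitable for small, finite world sets.
    """
    worlds = set(p)
    for b in powerset(worlds):
        if prob(b, p) > 0 and a & b:
            if prob(a & b, p) / prob(b, p) <= r:
                return False
    return True

# Illustrative example (hypothetical worlds and probabilities):
p = {"w1": 0.54, "w2": 0.34, "w3": 0.07, "w4": 0.05}
a = {"w1", "w2"}        # candidate belief: "the actual world is w1 or w2"
print(is_stable(a, p))  # True for this distribution: a survives any consistent evidence
```

A proposition that passes this test can be believed outright while remaining immune to revision by any evidence it is consistent with, which is the kind of dynamics the stability-based approaches above aim to characterize.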