The field of Artificial Intelligence is moving towards more transparent and trustworthy systems, particularly in high-stakes decision-making environments such as healthcare and finance. Explainable AI (XAI) techniques are being explored to provide insight into AI decision-making, improving model reliability and helping users verify AI-generated assessments. Researchers are also working to integrate legal considerations into XAI systems so that explanations are actionable and contestable, and there is growing interest in applying XAI to further domains, including food engineering and nuclear energy.

Noteworthy papers include: "Towards an Evaluation Framework for Explainable Artificial Intelligence Systems for Health and Well-being", which introduces an evaluation framework for developing explainable AI systems in healthcare; "Legally-Informed Explainable AI", which makes the case for integrating legal considerations into XAI systems; and "eXplainable AI for data driven control", which proposes an XAI methodology based on Inverse Optimal Control to obtain local explanations of a controller's behavior.
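
To give a flavor of how cost-function recovery can act as an explanation of a controller, the sketch below fits LQR cost weights to an observed feedback gain and reads the recovered weights as an indication of which state components the controller penalizes most. This is a minimal, generic inverse-optimal-control illustration, not the method from "eXplainable AI for data driven control": the double-integrator dynamics, the diagonal cost parametrization, and the coarse grid search are all assumptions made for the example.

```python
# Minimal sketch (illustrative assumptions throughout, not the paper's implementation):
# recover LQR cost weights that best reproduce an observed controller gain, then
# read the weights as a local explanation of "what the controller cares about".
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy double-integrator dynamics (assumed for the example)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])

def lqr_gain(Q, R):
    """Discrete-time LQR gain for x_{k+1} = A x_k + B u_k, u_k = -K x_k."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# "Black-box" controller to be explained; fabricated here from a hidden cost
# so the example is self-contained.
K_obs = lqr_gain(np.diag([5.0, 1.0]), np.array([[1.0]]))

# Inverse optimal control by coarse grid search over diagonal state weights:
# find (q1, q2) whose LQR gain best matches the observed behavior.
best, best_err = None, np.inf
for q1 in np.logspace(-1, 2, 30):
    for q2 in np.logspace(-1, 2, 30):
        K = lqr_gain(np.diag([q1, q2]), np.array([[1.0]]))
        err = np.linalg.norm(K - K_obs)
        if err < best_err:
            best, best_err = (q1, q2), err

print(f"Recovered weights q1={best[0]:.2f}, q2={best[1]:.2f} (fit error {best_err:.2e})")
# The relative magnitudes of q1 and q2 serve as the explanation: they indicate
# how strongly the observed behavior penalizes each state component.
```

The recovered weights are the explanatory artifact here: rather than attributing a single action to input features, they summarize which objective the observed control behavior is most consistent with in the neighborhood of the identified gain.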