The field of artificial intelligence is moving toward a greater emphasis on explainability and causality, with a focus on techniques that provide insight into the decision-making processes of complex models. This trend is driven by the need for more transparent and trustworthy AI systems, particularly in high-stakes applications such as healthcare and finance. Recent research has made significant progress in this area, developing new methods for explaining deep learning models, discovering causal relationships in data, and learning from simulators. Notably, partial orders, exemplars, and natural language rules have shown promise in producing more accurate and interpretable explanations, while the integration of causal machine learning with meta-learning has enabled the estimation of individualized treatment effects and the development of more personalized AI systems. Overall, the field is advancing rapidly, with growing recognition of the importance of explainability and causality in AI systems.

Noteworthy papers include Revealing Inherent Concurrency in Event Data, which introduces a novel process-discovery algorithm that preserves inherent concurrency; GnnXemplar, which proposes a global explainer for Graph Neural Networks that uses natural language rules to explain predictions; and Causal Machine Learning for Surgical Interventions, which develops a multi-task meta-learning framework for estimating individualized treatment effects in surgical decision-making.
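
To make the individualized treatment effect idea concrete, the sketch below shows a minimal T-learner, one common meta-learning approach to estimating heterogeneous effects from observational data. It is a generic illustration under assumed synthetic data and model choices, not the multi-task framework from Causal Machine Learning for Surgical Interventions.

```python
# Minimal T-learner sketch for individualized treatment effect (ITE) estimation.
# Illustrative only: synthetic data, simple outcome models, no confounding adjustment
# beyond conditioning on the observed covariates X.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic observational data: covariates X, binary treatment T, outcome Y.
n = 2000
X = rng.normal(size=(n, 5))
T = rng.binomial(1, p=1 / (1 + np.exp(-X[:, 0])))        # treatment assignment depends on X
tau = 1.0 + 0.5 * X[:, 1]                                 # true heterogeneous treatment effect
Y = X[:, 0] + tau * T + rng.normal(scale=0.5, size=n)     # observed outcome

# T-learner: fit separate outcome models on treated and control units.
model_treated = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1])
model_control = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0])

# The estimated ITE for each individual is the difference of the two predictions.
ite_hat = model_treated.predict(X) - model_control.predict(X)
print("mean estimated effect:", ite_hat.mean(), "| true mean effect:", tau.mean())
```

In practice, frameworks in this space replace the two independent regressors with shared or meta-learned representations so that information is pooled across treatments and tasks, which is what enables personalization when data per treatment arm is scarce.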