Research in explainable AI is increasingly directed at methods for risk assessment and management. Recent work underscores the value of interpretable machine learning models for identifying local drivers of risk and how those drivers vary across counties, with applications ranging from wildfire risk assessment to clinical safety. The integration of explainability into clinical safety frameworks has emerged as a key research area, enabling interpretability outputs to be used as structured safety evidence. In parallel, probabilistic risk models have shown promise in predicting extreme events such as wildfires and in identifying key ecosystem-specific drivers. Taken together, the field is moving toward more transparent, trustworthy, and actionable AI systems that support decision-making and risk management.

Noteworthy papers include: WildfireGenome, which advances wildfire risk assessment through interpretable machine learning; Embedding Explainable AI in NHS Clinical Safety, which proposes an Explainability-Enabled Clinical Safety Framework; and SCI, which presents a closed-loop, control-theoretic framework that models interpretability as a regulated state.