The field of explainable AI is moving toward methods for explaining complex models in safety-critical applications such as autonomous driving, healthcare, and finance. Recent work has focused on improving the interpretability and transparency of deep learning models, with particular emphasis on counterfactual explanations and model-agnostic explainability techniques. Notable contributions include a guided reverse process for categorical features based on an approximation to the Gumbel-softmax distribution, and a latent diffusion model for generating video counterfactual explanations. These advances have the potential to increase trust and reliability in AI systems and to facilitate their deployment in high-stakes domains.
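The guided reverse process itself is specific to the cited paper, but the Gumbel-softmax relaxation it approximates is a standard way to make sampling from categorical features differentiable. As background only, here is a minimal PyTorch sketch of that relaxation; the function name, temperature value, and example logits are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def sample_gumbel_softmax(logits: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Draw a differentiable, approximately one-hot sample from categorical logits.

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-scaled softmax;
    as the temperature approaches 0 the output approaches a one-hot categorical
    sample while remaining differentiable with respect to `logits`.
    """
    gumbel_noise = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel_noise) / temperature, dim=-1)

# Illustrative use: relax a 3-way categorical feature so gradients can flow
# through the discrete choice, e.g. when guiding a reverse diffusion step.
logits = torch.tensor([[2.0, 0.5, -1.0]], requires_grad=True)
relaxed_sample = sample_gumbel_softmax(logits, temperature=0.5)
relaxed_sample.sum().backward()  # gradients propagate back to the logits
```

The same relaxation is also available directly as `torch.nn.functional.gumbel_softmax`; the explicit version above just makes the noise-plus-softmax mechanism visible.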