The field of explainable artificial intelligence (XAI) is moving toward more nuanced and comprehensive methods for evaluating and comparing model explainability. Researchers are proposing novel frameworks and techniques to balance model performance with interpretability, particularly in high-stakes domains such as finance and healthcare. Notable papers in this area include Unlocking the Black Box, which proposes a five-dimensional framework for evaluating explainable AI in credit risk, and RENTT, which presents a novel algorithm for transforming neural networks into decision trees to provide ground-truth explanations. These advances could increase trust and transparency in AI decision-making and pave the way for more efficient, interpretable machine-learning applications across industries.
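The general idea behind network-to-tree methods can be illustrated with a generic surrogate-tree sketch (this is not the RENTT algorithm itself, whose details are not given here): probe a black-box model on sampled inputs, fit an interpretable decision tree to its outputs, and measure fidelity, i.e. how often the tree reproduces the model's decisions. The `black_box` network below is a hypothetical hand-coded example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical "black box": a tiny hand-coded two-layer ReLU network.
def black_box(X):
    W1 = np.array([[1.5, -2.0], [-1.0, 1.0]])
    b1 = np.array([0.1, -0.2])
    h = np.maximum(X @ W1 + b1, 0.0)          # ReLU hidden layer
    logits = h @ np.array([1.0, -1.0])
    return (logits > 0).astype(int)           # binary decision

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5000, 2))        # probe the input space
y = black_box(X)

# Fit an interpretable surrogate tree to mimic the network's decisions.
tree = DecisionTreeClassifier(max_depth=5).fit(X, y)

# Fidelity: agreement between tree and network on fresh inputs.
X_test = rng.uniform(-1, 1, size=(2000, 2))
fidelity = (tree.predict(X_test) == black_box(X_test)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

A surrogate tree fit this way gives an approximate explanation whose quality is quantified by the fidelity score; exact transformations, as the digest describes for RENTT, instead aim to reproduce the network's decisions without approximation error.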