The field of Artificial Intelligence is increasingly prioritizing explainability and transparency. Researchers are developing methods and techniques that provide insight into the decision-making processes of AI systems, making those systems more trustworthy and reliable. This trend is evident in the growing body of explainable AI (XAI) frameworks, evaluation protocols, and visualization tools. The goal is to enable domain experts and end users to understand the reasoning behind AI-driven decisions, which is crucial in high-stakes applications such as healthcare and climate science.

Noteworthy papers in this area include a proposed framework for the systematic assessment and reporting of explainable AI features, which provides a comprehensive overview of evaluation protocols and metrics for XAI methods. Another significant contribution is a cognitive model for understanding explanations, which highlights the importance of cognitive accessibility and perceptual optimization in XAI. Additionally, the development of low-code and no-code strategies for climate dashboards, together with applications of XAI in real-world settings such as detecting AI-generated videos and diagnosing Age-related Macular Degeneration, demonstrates the potential of XAI to drive positive impact across domains.
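To make the idea of an XAI evaluation protocol concrete, the sketch below computes a simple deletion-based faithfulness score for a feature-attribution explanation. This is a generic illustration, not the method of any paper cited above: the linear model, the gradient-times-input attribution, and the `deletion_score` metric are all hypothetical placeholders chosen for brevity.

```python
import numpy as np

# Hypothetical stand-in for any scoring model: a fixed linear function.
rng = np.random.default_rng(0)
weights = rng.normal(size=10)

def model(x: np.ndarray) -> float:
    """Return a scalar prediction score for one input vector."""
    return float(weights @ x)

def attribute(x: np.ndarray) -> np.ndarray:
    """Toy feature attribution: gradient * input (exact for a linear model)."""
    return weights * x

def deletion_score(x: np.ndarray) -> float:
    """Deletion-based faithfulness check: remove features in order of
    attributed importance and track how the prediction decays. A low mean
    score over the deletion curve suggests the attribution ranks the truly
    influential features first."""
    order = np.argsort(-np.abs(attribute(x)))  # most important feature first
    masked = x.copy()
    curve = [abs(model(masked))]
    for i in order:
        masked[i] = 0.0  # "delete" the feature using a zero baseline
        curve.append(abs(model(masked)))
    return float(np.mean(curve))

x = rng.normal(size=10)
print(f"mean score over deletion curve: {deletion_score(x):.3f}")
```

In practice, evaluation suites of the kind the assessment framework advocates pair several such metrics (e.g., deletion, insertion, sensitivity) and report them together, so that XAI methods can be compared on a standardized basis rather than through ad hoc qualitative inspection.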