The field of artificial intelligence is moving toward a greater emphasis on explainability and transparency, with a focus on techniques that provide insight into the decision-making processes of AI models. Recent research has highlighted the importance of understanding how models use language and reach decisions, and has produced new methods for attributing model answers to specific regions of visual input. There is also growing recognition of the need for more interpretable and trustworthy AI systems, including new cognitive architectures that shape language and support more deliberate, reflective interaction with AI explanations. Noteworthy papers in this area include:

- RADAR, which introduces a semi-automatic approach for building a benchmark dataset to evaluate and enhance the ability of multimodal large language models to attribute their reasoning to regions of the visual input (a generic scoring sketch follows this list).
- CausalSent, which develops a two-headed RieszNet-based neural network architecture for interpretable sentiment classification with causal inference (an architectural sketch follows this list).
- Model Science, which introduces a conceptual framework for a new discipline that places the trained model at the core of analysis, aiming to interact with, verify, explain, and control its behavior across diverse operational contexts.
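To make "attributing answers to visual regions" concrete, the sketch below shows one generic way such attribution could be scored: compare the region a model cites as evidence against an annotated ground-truth region using intersection-over-union. The box format, threshold, and function names are illustrative assumptions, not RADAR's actual evaluation protocol.

```python
# Generic region-attribution scoring sketch (not RADAR's metric).
# Assumes the model reports an evidence bounding box (x1, y1, x2, y2) and the
# benchmark provides an annotated ground-truth evidence box per question.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(pred: Box, gold: Box) -> float:
    """Intersection-over-union between a predicted and a gold evidence box."""
    ix1, iy1 = max(pred[0], gold[0]), max(pred[1], gold[1])
    ix2, iy2 = min(pred[2], gold[2]), min(pred[3], gold[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_pred = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_gold = (gold[2] - gold[0]) * (gold[3] - gold[1])
    union = area_pred + area_gold - inter
    return inter / union if union > 0 else 0.0

def attribution_accuracy(preds: List[Box], golds: List[Box],
                         threshold: float = 0.5) -> float:
    """Fraction of answers whose cited region overlaps the annotated
    evidence region by at least `threshold` IoU."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(preds, golds))
    return hits / len(golds) if golds else 0.0

# Example: one well-grounded answer and one mis-attributed answer.
preds = [(10, 10, 50, 50), (0, 0, 5, 5)]
golds = [(12, 12, 48, 52), (60, 60, 90, 90)]
print(attribution_accuracy(preds, golds))  # 0.5
```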
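The CausalSent entry can likewise be illustrated with a minimal sketch of a two-headed RieszNet-style network: a shared text encoder feeds an outcome head that predicts sentiment and a Riesz head that estimates the Riesz representer used in debiased treatment-effect estimation. This is a sketch under stated assumptions (a bag-of-words encoder, a binary treatment indicator such as the presence of a target word, and the average-treatment-effect functional); layer sizes and names are illustrative, not the authors' implementation.

```python
# Two-headed RieszNet-style sketch for causal sentiment analysis (assumed
# architecture, not the CausalSent code): shared encoder -> outcome head
# (sentiment) and Riesz head (Riesz representer alpha).
import torch
import torch.nn as nn

class TwoHeadedRieszNet(nn.Module):
    def __init__(self, vocab_size: int, hidden_dim: int = 128):
        super().__init__()
        # Shared representation: mean-pooled embeddings followed by an MLP.
        self.embed = nn.EmbeddingBag(vocab_size, hidden_dim, mode="mean")
        self.shared = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        # Outcome head: predicts sentiment from the representation plus the
        # treatment indicator (e.g., presence of a target word).
        self.outcome_head = nn.Sequential(
            nn.Linear(hidden_dim + 1, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )
        # Riesz head: estimates the Riesz representer alpha(x, t).
        self.riesz_head = nn.Sequential(
            nn.Linear(hidden_dim + 1, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, token_ids, offsets, treatment):
        z = self.shared(self.embed(token_ids, offsets))
        zt = torch.cat([z, treatment.unsqueeze(-1)], dim=-1)
        return self.outcome_head(zt).squeeze(-1), self.riesz_head(zt).squeeze(-1)

def riesz_loss(model, token_ids, offsets, treatment):
    """RieszNet-style representer loss E[alpha^2] - 2 E[m(x; alpha)], where
    for the ATE functional m(x; alpha) = alpha(x, 1) - alpha(x, 0)."""
    _, a_obs = model(token_ids, offsets, treatment)
    _, a_treat = model(token_ids, offsets, torch.ones_like(treatment))
    _, a_ctrl = model(token_ids, offsets, torch.zeros_like(treatment))
    return (a_obs ** 2).mean() - 2.0 * (a_treat - a_ctrl).mean()
```

In RieszNet-style training, this representer loss would typically be combined with a supervised loss on the sentiment labels for the outcome head, so that both heads share the same learned representation.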