The field of machine learning is moving toward greater interpretability, with a focus on developing methods that provide insight into model decisions. Recent research has introduced new techniques for comparing and visualizing learned representations, enabling more direct and interpretable comparisons between models. There is also growing interest in concept-based models, which extract human-understandable concepts from data and make decisions transparently by predicting from those concepts; such models have shown promising results in applications including image classification and medical diagnosis. Researchers are further exploring ways to improve the reliability and generalization of concept-based models, for example by incorporating object-centric information and by using generative models to learn latent cost variables.

Noteworthy papers include Representational Difference Explanations, which proposes a method for discovering and visualizing differences between learned representations; Object Centric Concept Bottlenecks, which introduces a framework that combines concept-based models with pre-trained object-centric foundation models; and Through a Steerable Lens, which proposes a novel framework for visualizing the implicit path between classes in a neural network.
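To make the concept-bottleneck idea concrete, the sketch below shows a generic two-stage model: an input is first mapped to a small vector of human-nameable concept scores, and the class prediction is then a linear function of those scores, so each decision can be traced back to concept activations. This is a minimal, illustrative PyTorch sketch of the general approach, not the implementation of any paper mentioned above; the layer sizes, concept supervision, and loss weighting are assumptions.

```python
# Generic concept bottleneck model (illustrative sketch, not a specific paper's method).
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Stage 1: input -> one logit per human-understandable concept.
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_concepts)
        )
        # Stage 2: concepts -> class logits; the linear weights read directly as
        # "how much each concept contributes to each class".
        self.task_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.concept_net(x)
        concepts = torch.sigmoid(concept_logits)   # concept activations in [0, 1]
        class_logits = self.task_head(concepts)
        return class_logits, concept_logits


if __name__ == "__main__":
    model = ConceptBottleneckModel(in_dim=64, n_concepts=8, n_classes=3)
    x = torch.randn(32, 64)                         # toy batch of inputs
    y = torch.randint(0, 3, (32,))                  # class labels
    c = torch.randint(0, 2, (32, 8)).float()        # binary concept annotations

    class_logits, concept_logits = model(x)
    # Joint objective: task loss plus concept supervision (the 0.5 weight is arbitrary).
    loss = nn.functional.cross_entropy(class_logits, y) \
        + 0.5 * nn.functional.binary_cross_entropy_with_logits(concept_logits, c)
    loss.backward()
    print(f"joint loss: {loss.item():.4f}")
```

After training a model of this form, inspecting `model.task_head.weight` shows which concepts drive each class prediction, which is the kind of transparent decision-making the concept-based work discussed above aims for.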