Machine learning research is placing growing emphasis on interpretability, developing models whose decisions are transparent and explainable rather than opaque. One key trend is the use of probabilistic prototypes and graph-based methods to expose the decision-making processes of neural networks; another is multi-granular interpretability, which lets users examine the reasoning behind a prediction at different levels of abstraction. This shift toward transparent, explainable models has direct implications for trust, safety, and reliability. Noteworthy papers include VI3NR, which improves the initialization of implicit neural representations, and From GNNs to Trees, which introduces a Tree-like Interpretable Framework for graph classification. Together, these works show how interpretability and transparency can be built into models and underscore the value of continued research in this direction.
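To make the prototype-based trend concrete, here is a minimal sketch of a generic prototype classification head in PyTorch, in which a prediction can be traced back to the learned prototypes it most resembles. This is an illustrative, ProtoPNet-style design under my own assumptions, not the specific method of any paper mentioned above; all class names, dimensions, and the similarity formula are hypothetical choices.

```python
# Minimal sketch of a prototype-based interpretable head (assumed design,
# not taken from the papers above). Dimensions and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    """Scores an embedding by its similarity to learned class prototypes,
    so each prediction can be explained by its nearest prototypes."""

    def __init__(self, embed_dim: int, num_classes: int, protos_per_class: int = 3):
        super().__init__()
        self.num_classes = num_classes
        self.protos_per_class = protos_per_class
        # One bank of prototypes per class, learned jointly with the encoder.
        self.prototypes = nn.Parameter(
            torch.randn(num_classes * protos_per_class, embed_dim)
        )

    def forward(self, z: torch.Tensor):
        # Squared Euclidean distance from each embedding to each prototype.
        dists = torch.cdist(z, self.prototypes) ** 2              # (B, C*P)
        # Smaller distance -> larger similarity score.
        sims = torch.log((dists + 1.0) / (dists + 1e-4))          # (B, C*P)
        # Aggregate each class's prototype similarities into a logit.
        logits = sims.view(-1, self.num_classes, self.protos_per_class).max(dim=-1).values
        return logits, dists                                       # dists support explanations

# Usage: interpret a prediction by inspecting which prototypes were closest.
head = PrototypeHead(embed_dim=64, num_classes=5)
z = torch.randn(8, 64)                      # encoder output for a batch of 8
logits, dists = head(z)
probs = F.softmax(logits, dim=-1)           # class probabilities
nearest_proto = dists.argmin(dim=-1)        # most similar prototype per example
```

The design choice that makes this interpretable is that the logits are computed only from distances to prototypes, so the explanation ("this input looks like prototype k of class c") is faithful to the actual decision rule rather than a post hoc rationalization.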