The field of machine learning is moving toward more interpretable models, with a focus on understanding how individual features contribute to a prediction. Interpretability matters most in high-stakes domains such as healthcare, where practitioners must be able to justify decisions informed by a model's output. Recent work builds interpretability in from the start, for example through shape functions and counterfactual explanations, and reports competitive predictive performance across applications ranging from image-to-music generation to graph classification and social recommendation.

Notable papers in this area include the introduction of Multiplicative-Additive Constrained Models, which improve on existing interpretable models by disentangling interactive (multiplicative) terms from independent (additive) ones, and MUSE-Explainer, which provides clear, human-friendly explanations for music graph classification models. In addition, the SoREX framework offers a self-explanatory approach to social recommendation, and GDLNN combines programming languages and neural networks for accurate and easy-to-explain graph classification.
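
To make the additive-versus-multiplicative distinction concrete, the sketch below shows a generalized additive model whose prediction is a sum of learned per-feature shape functions plus one explicitly separated multiplicative interaction term. This is only a minimal illustration of the general idea, not the Multiplicative-Additive Constrained Model from the cited paper: the class names, the single fixed feature pair, and the small-MLP shape functions are all assumptions made for the example.

```python
# Minimal sketch (illustrative only): additive shape functions per feature,
# plus one multiplicative interaction term kept separate so each part's
# contribution to the prediction can be inspected on its own.
import torch
import torch.nn as nn


class ShapeFunction(nn.Module):
    """Small MLP mapping a single feature to its scalar contribution."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x_j: torch.Tensor) -> torch.Tensor:
        # x_j: (batch,) -> contribution: (batch,)
        return self.net(x_j.unsqueeze(-1)).squeeze(-1)


class MultiplicativeAdditiveSketch(nn.Module):
    """y_hat = bias + sum_j f_j(x_j) + w * g_a(x_a) * g_b(x_b) for one chosen pair (a, b)."""

    def __init__(self, num_features: int, pair: tuple):
        super().__init__()
        self.shapes = nn.ModuleList(ShapeFunction() for _ in range(num_features))
        self.pair_shapes = nn.ModuleList(ShapeFunction() for _ in range(2))
        self.pair = pair
        self.interaction_weight = nn.Parameter(torch.zeros(1))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Independent (additive) terms: one shape function per feature.
        additive = sum(f(x[:, j]) for j, f in enumerate(self.shapes))
        # Interactive (multiplicative) term for the chosen feature pair,
        # disentangled from the additive part rather than mixed into it.
        a, b = self.pair
        interaction = self.pair_shapes[0](x[:, a]) * self.pair_shapes[1](x[:, b])
        return self.bias + additive + self.interaction_weight * interaction


model = MultiplicativeAdditiveSketch(num_features=4, pair=(0, 2))
print(model(torch.randn(8, 4)).shape)  # torch.Size([8])
```

Because the additive and multiplicative parts are separate modules, each shape function can be plotted directly and the interaction term can be constrained or ablated independently, which is the kind of transparency these interpretable-by-design models aim for.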