Transformer research is moving toward a deeper understanding of the mechanisms and structures that govern model behavior. Recent studies have introduced new methods for interpreting and analyzing transformers, including linear response frameworks, sparse attention mechanisms, and self-ablation techniques. These approaches highlight the geometry of the loss landscape, the roles of individual attention heads, and the relationship between sparsity and interpretability. Tools and frameworks such as Prisma and GMAR have made it easier to analyze and interpret vision transformers, while techniques such as softpick and self-ablation have improved the efficiency and transparency of transformer models. Together, these advances are building a more nuanced picture of how transformers work and where they can be applied. A generic illustration of ablation-based head analysis is sketched below.

Noteworthy papers include: Studying Small Language Models with Susceptibilities, which introduces a linear response framework for interpretability; Prisma, which presents an open-source framework for mechanistic interpretability in vision and video; and Self-Ablating Transformers, which introduces a self-ablation mechanism to probe the connection between sparsity and interpretability.
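
To make the idea of head-level ablation concrete, the following is a minimal, generic sketch: it zeroes out one attention head's contribution in a toy self-attention layer and measures the change in loss. It is not the specific self-ablation mechanism from Self-Ablating Transformers (nor the susceptibility framework); the module name, dimensions, and random data are illustrative assumptions.

```python
# Generic head-ablation sketch (NOT the papers' exact method): zero one
# head's output and compare the resulting loss against the unablated model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAttention(nn.Module):
    """Single multi-head self-attention layer with optional head ablation."""
    def __init__(self, d_model=32, n_heads=4):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, ablate_head=None):
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each to (B, n_heads, T, d_head)
        shape = (B, T, self.n_heads, self.d_head)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        att = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        heads = att @ v                      # (B, n_heads, T, d_head)
        if ablate_head is not None:
            heads[:, ablate_head] = 0.0      # remove this head's contribution
        return self.out(heads.transpose(1, 2).reshape(B, T, D))

torch.manual_seed(0)
layer = TinyAttention()
x = torch.randn(8, 16, 32)                   # toy batch: (batch, seq, d_model)
target = torch.randn(8, 16, 32)              # toy regression target

with torch.no_grad():
    base_loss = F.mse_loss(layer(x), target)
    for h in range(layer.n_heads):
        loss_h = F.mse_loss(layer(x, ablate_head=h), target)
        print(f"head {h}: loss change {(loss_h - base_loss).item():+.4f}")
```

Heads whose removal changes the loss most are, under this simple probe, the ones the model relies on; sparsity-oriented methods push models toward having fewer such heads, which is one way sparsity and interpretability connect.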