Research on language models is moving toward a deeper understanding of their internal mechanisms and decision-making processes. Recent studies have focused on uncovering the geometric and algebraic structures that underlie these models, with a particular emphasis on interpretability and explainability. Researchers are developing new methods to analyze and visualize the internal workings of language models, including dimensionality reduction techniques and the localization of circuits and causal variables. These advances have the potential to improve our understanding of how language models make predictions and to enable the development of more transparent and trustworthy models.

Notable papers in this area include:

- Internalizing Tools as Morphisms in Graded Transformers, which introduces a graded formulation of internal symbolic computation for transformers.
- Findings of the BlackboxNLP 2025 Shared Task, which presents a community-wide, reproducible comparison of mechanistic interpretability techniques.
- Geometry of Decision Making in Language Models, which studies the geometry of hidden representations in large language models through the lens of intrinsic dimension (see the sketch after this list).
- Emergence and Localisation of Semantic Role Circuits in LLMs, which proposes a method for studying how language models implement semantic roles.
- Scale-Agnostic Kolmogorov-Arnold Geometry in Neural Networks, which extends KAG analysis to MNIST digit classification using 2-layer MLPs, with systematic spatial analysis at multiple scales.
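To make the intrinsic-dimension lens concrete, the sketch below estimates the intrinsic dimension of a matrix of hidden representations using the two-nearest-neighbor (TwoNN) estimator. The choice of estimator and the synthetic stand-in for model activations are illustrative assumptions, not the specific method of the cited paper.

```python
# Minimal sketch: TwoNN intrinsic-dimension estimation on hidden representations.
# Assumptions: the TwoNN estimator is used as a generic example of this line of
# analysis, and random data stands in for actual language-model activations.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def twonn_intrinsic_dimension(X: np.ndarray) -> float:
    """Estimate intrinsic dimension from ratios of 2nd- to 1st-nearest-neighbor distances."""
    # Distances to the two nearest neighbors (column 0 is each point's distance to itself).
    nn = NearestNeighbors(n_neighbors=3).fit(X)
    dists, _ = nn.kneighbors(X)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / r1
    # Maximum-likelihood estimate: d = N / sum(log mu_i).
    return len(mu) / np.sum(np.log(mu))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for hidden states: 2,000 points on a 10-D manifold embedded in 768 dimensions.
    latent = rng.standard_normal((2000, 10))
    hidden_states = latent @ rng.standard_normal((10, 768))
    print(f"Estimated intrinsic dimension: {twonn_intrinsic_dimension(hidden_states):.1f}")
```

In practice one would replace the synthetic `hidden_states` with activations collected from a specific layer of a language model; tracking how the estimate varies across layers is one way such geometric analyses are typically reported.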