The field of natural language processing is moving towards a deeper understanding of the mechanisms underlying text embeddings and transformer architectures. Recent work has highlighted the importance of correcting bias in text embeddings, showing that a consistent bias component can be decomposed and removed through refined renormalization techniques. Analyses of positional bias in multimodal embedding models have likewise revealed that such biases degrade performance and manifest differently across modalities. New positional encoding mechanisms, such as RollPE, have also shown promise in improving model performance. In addition, recent research offers a unified interpretation of the transformer architecture, connecting self-attention to principles of distributional semantics. Noteworthy papers include Correcting Mean Bias in Text Embeddings, which proposes a plug-and-play correction that improves the performance of existing models, and Decoupling Positional and Symbolic Attention Behavior in Transformers, which deepens the understanding of the positional-versus-symbolic dichotomy in attention-head behavior.
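To make the bias-correction idea concrete, the sketch below shows a generic mean-centering and renormalization pass over a batch of sentence embeddings. It is an illustrative assumption of how such a plug-and-play correction could look, not the exact refined procedure from Correcting Mean Bias in Text Embeddings; the function name and shapes are hypothetical.

```python
import numpy as np

def correct_mean_bias(embeddings: np.ndarray) -> np.ndarray:
    """Illustrative mean-bias correction (hypothetical helper, not the
    paper's exact method): subtract the corpus-level mean direction from
    each embedding, then renormalize to unit length."""
    mean_vec = embeddings.mean(axis=0, keepdims=True)      # shared bias component
    centered = embeddings - mean_vec                        # remove the common offset
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    return centered / np.clip(norms, 1e-12, None)           # back onto the unit sphere

# Example: debias a small batch of (already L2-normalized) embeddings
emb = np.random.randn(8, 384)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
debiased = correct_mean_bias(emb)
```

Because the correction only requires the mean of the embedding matrix, it can be applied after any existing encoder without retraining, which is what makes this family of fixes attractive as a plug-and-play step.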