The field of natural language processing and artificial intelligence is seeing rapid progress in the design of efficient transformer architectures and training methods. Recent work focuses on improving the computational efficiency and scalability of transformer models, which is crucial for deploying them in real-world applications. One key direction is the development of novel attention mechanisms, such as Grouped Differential Attention and Compressed Convolutional Attention, which aim to reduce the computational cost and memory requirements of standard attention. Another active area is the design of optimized training methods, including low-precision formats, regularization techniques, and vectorized flash attention algorithms. Together, these advances promise to improve the performance and efficiency of transformer models and to broaden the range of tasks and domains in which they can be applied.

Noteworthy papers in this area include Exponent-Concentrated FP8, a lossless compression framework for GenAI model weights, and RACE Attention, a kernel-inspired alternative to Softmax Attention that achieves linear time complexity. Another is REG, an optimizer that replaces Muon's aggressive matrix sign operator with a Row-and-Column-Scaling operator and shows promising gains in training stability and performance.
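To make the attention-mechanism discussion concrete, here is a minimal NumPy sketch of the differential-attention idea that Grouped Differential Attention builds on: the attention map is formed as the difference of two softmax maps, which cancels attention noise common to both. The grouping scheme itself is not reproduced; applying this per group of heads, and the weight names used here, are assumptions for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def differential_attention(X, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Single-head differential attention: the difference of two softmax
    attention maps is applied to the values. The grouped variant is assumed
    to apply this independently per group of heads (not shown)."""
    d = Wq1.shape[1]
    A1 = softmax((X @ Wq1) @ (X @ Wk1).T / np.sqrt(d))
    A2 = softmax((X @ Wq2) @ (X @ Wk2).T / np.sqrt(d))
    return (A1 - lam * A2) @ (X @ Wv)

# toy usage
n, d_model, d_head = 6, 16, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d_model))
Ws = [rng.normal(size=(d_model, d_head)) * 0.1 for _ in range(5)]
print(differential_attention(X, *Ws).shape)  # (6, 8)
```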
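RACE Attention is characterized above as a kernel-inspired, linear-time alternative to softmax attention. Its specific kernel construction is not described here; the sketch below shows only the generic kernelized linear-attention trick such methods rely on: replacing exp(q·k) with a feature-map inner product φ(q)·φ(k) lets the matrix products be reassociated so the cost grows linearly in sequence length. The feature map `phi` is a placeholder assumption, not RACE Attention's actual kernel.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard softmax attention: O(n^2) in sequence length n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized attention: replace exp(q.k) with phi(q).phi(k), then
    reassociate (phi(Q) @ (phi(K)^T V)) to get O(n) cost in sequence length."""
    Qp, Kp = phi(Q), phi(K)        # (n, d) positive feature maps
    KV = Kp.T @ V                  # (d, d_v), computed once for all queries
    Z = Kp.sum(axis=0)             # (d,) shared normalizer term
    return (Qp @ KV) / (Qp @ Z)[:, None]

# toy usage: both variants produce an (n, d) output
n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, n, d))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```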
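The REG remark contrasts Muon's matrix sign operator with a Row-and-Column-Scaling operator. The sketch below pairs the Newton-Schulz iteration used in the public Muon implementation to approximate the matrix sign of an update (pushing all its singular values toward 1) with a hypothetical row-and-column scaling that only normalizes per-row and per-column magnitudes. The RCS semantics shown are an assumption for illustration, not the paper's exact operator.

```python
import numpy as np

def matrix_sign_newton_schulz(G, steps=5):
    """Approximate the matrix sign / orthogonal polar factor of G, as Muon
    does for weight updates; coefficients follow the public Muon release."""
    X = G / (np.linalg.norm(G) + 1e-8)
    a, b, c = 3.4445, -4.7750, 2.0315
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X

def row_and_column_scaling(G, eps=1e-8):
    """Hypothetical RCS-style operator (assumed semantics): rescale G by
    per-row and per-column RMS norms instead of flattening its spectrum."""
    row = np.sqrt((G ** 2).mean(axis=1, keepdims=True)) + eps
    col = np.sqrt((G ** 2).mean(axis=0, keepdims=True)) + eps
    return G / np.sqrt(row * col)

# toy comparison: the sign operator drives singular values toward 1,
# while the scaling operator leaves the spectrum's shape largely intact
G = np.random.default_rng(0).normal(size=(16, 32))
print(np.linalg.svd(matrix_sign_newton_schulz(G), compute_uv=False)[:3])
print(np.linalg.svd(row_and_column_scaling(G), compute_uv=False)[:3])
```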