The field of large language models is moving toward more efficient training and optimization methods. Researchers are exploring techniques that reduce the computational cost and memory footprint of training while improving model performance and generalization. One direction is the design of optimizers tailored to the structure of large language models, such as Conda and AuON, which have shown promising results in convergence speed and stability. Another focus is distributed training methods, such as partial parameter updates and dual batch sizes, which can significantly reduce training time and improve model accuracy. Researchers are also investigating new approaches to gradient computation and optimization, such as per-example gradients and randomized matrix sketching, as more efficient paths to training at scale.

Notable papers in this area include Conda, which reports 2-2.5 times the convergence speed of AdamW, and AuON, which demonstrates strong performance without constructing semi-orthogonal matrices. Muon is another notable optimizer: it has been shown to outperform Adam in tail-end associative memory learning and to yield multiplicative efficiency gains when combined with Multi-Head Latent Attention and Mixture-of-Experts.
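To make the semi-orthogonal-matrix point concrete, the sketch below illustrates the Newton-Schulz orthogonalization step that Muon-style optimizers apply to the momentum matrix before the weight update, i.e., the construction that AuON avoids. This is a minimal PyTorch sketch, not a faithful reproduction of any paper's implementation: the iteration coefficients and normalization follow the publicly released Muon reference code, while the function names `newton_schulz_orthogonalize` and `muon_style_step` and the learning-rate and momentum values are hypothetical choices for illustration.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately project a 2D gradient/momentum matrix onto the set of
    semi-orthogonal matrices using an odd quintic Newton-Schulz iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315   # coefficients from the public Muon reference code
    X = G.clone().float()
    transposed = X.size(0) > X.size(1)
    if transposed:
        X = X.T                          # work in the wide orientation so X @ X.T stays small
    X = X / (X.norm() + 1e-7)            # Frobenius norm <= 1 implies spectral norm <= 1
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    return (X.T if transposed else X).to(G.dtype)

def muon_style_step(weight: torch.Tensor, grad: torch.Tensor,
                    momentum_buf: torch.Tensor, lr: float = 0.02, beta: float = 0.95):
    """Hypothetical single-parameter update: accumulate momentum, orthogonalize, step."""
    momentum_buf.mul_(beta).add_(grad)
    update = newton_schulz_orthogonalize(momentum_buf)
    weight.data.add_(update, alpha=-lr)
```

The AuON result cited above amounts to matching this kind of behavior without performing the orthogonalization itself, which is the most expensive part of each Muon-style step.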