The field of large language models (LLMs) is moving toward more efficient optimization and quantization techniques that reduce memory consumption and improve performance. Researchers are exploring new ways to analyze LLM training, for example using information geometry and quantum metrics to better characterize the optimization landscape. There is also a push toward more efficient quantization methods, including techniques that reduce rounding and clipping errors and blockwise optimization schemes that avoid memory-intensive full-model backpropagation. Noteworthy papers in this area include BASE-Q, which combines bias correction and asymmetric scaling to reduce rounding and clipping errors; PAROAttention, which reorganizes attention patterns to ease sparsification and quantization; AnTKV, which uses Anchor Token-aware Vector Quantization to compress the KV cache; and Outlier-Safe Pre-Training, which offers practical guidelines for proactively preventing outlier formation during LLM training.
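To make the quantization vocabulary concrete, the following is a minimal NumPy sketch of per-output-channel asymmetric weight quantization with a simple bias-correction step. It illustrates the general ideas behind approaches like BASE-Q, not the paper's actual algorithm; the function names, bit width, and calibration setup are illustrative assumptions.

```python
import numpy as np

def asymmetric_quantize(w, n_bits=4):
    """Per-output-channel asymmetric uniform quantization of a weight matrix.

    w: (out_features, in_features) float array.
    Returns the dequantized weights (float) and the integer codes.
    """
    qmax = 2 ** n_bits - 1
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = np.maximum(w_max - w_min, 1e-8) / qmax           # per-channel step size
    zero_point = np.round(-w_min / scale)                     # shifts range to [0, qmax]
    q = np.clip(np.round(w / scale) + zero_point, 0, qmax)    # rounding + clipping
    w_hat = (q - zero_point) * scale                          # dequantized approximation
    return w_hat, q.astype(np.int32)

def bias_correct(bias, w, w_hat, calib_x):
    """Fold the mean quantization error into the layer bias.

    calib_x: (n_samples, in_features) calibration activations.
    The corrected bias cancels E[x @ (w - w_hat).T] on the calibration data.
    """
    mean_x = calib_x.mean(axis=0)                 # E[x] over the calibration set
    err = (w - w_hat) @ mean_x                    # expected per-channel output error
    return bias + err

# Toy usage (hypothetical layer): quantize one linear layer and correct its bias.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))
b = rng.normal(size=8)
x_calib = np.maximum(rng.normal(size=(256, 16)), 0.0)  # e.g. post-ReLU activations

w_hat, codes = asymmetric_quantize(w, n_bits=4)
b_hat = bias_correct(b, w, w_hat, x_calib)

full = x_calib @ w.T + b
naive = x_calib @ w_hat.T + b
corrected = x_calib @ w_hat.T + b_hat
print("MSE without bias correction:", np.mean((full - naive) ** 2))
print("MSE with bias correction:   ", np.mean((full - corrected) ** 2))
```

The toy comparison shows the intuition: asymmetric scaling adapts the quantization grid to each channel's range, while bias correction removes the systematic component of the quantization error on calibration data, leaving only its zero-mean part.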