The field of multilingual large language models (LLMs) is advancing rapidly, with particular attention to reducing language bias and improving cross-lingual alignment. Proposed solutions include batch-wise alignment strategies and representation-level alignment frameworks, which have been reported to improve non-English accuracy and multilingual generalization. Noteworthy papers include AlignX, which proposes a two-stage representation-level framework for enhancing multilingual performance, and TASER, which introduces a metric for automated translation quality assessment using large reasoning models. Research on cross-lingual information retrieval and multilingual reward modeling has also progressed, introducing new architectures and training methods. Overall, the field is moving toward more robust and equitable multilingual AI, with consistent performance across languages as a central goal. A sketch of what batch-wise representation alignment typically involves follows below.
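To make the idea of batch-wise representation alignment concrete, here is a minimal sketch of one common formulation: an InfoNCE-style contrastive loss that pulls pooled sentence representations of parallel English and non-English sentences together within a batch. This is a generic illustration under assumed inputs (pooled hidden states of parallel sentence pairs), not the actual objective of AlignX or any specific paper above; the function name, pooling, and temperature are illustrative assumptions.

```python
# Illustrative sketch of a batch-wise representation-alignment loss.
# Assumption: en_hidden and xx_hidden are (batch, dim) pooled sentence
# embeddings of parallel sentences taken from the model's hidden states.
# This is NOT the published AlignX objective, just a common pattern.
import torch
import torch.nn.functional as F

def batch_alignment_loss(en_hidden: torch.Tensor,
                         xx_hidden: torch.Tensor,
                         temperature: float = 0.05) -> torch.Tensor:
    # L2-normalize so dot products are cosine similarities.
    en = F.normalize(en_hidden, dim=-1)
    xx = F.normalize(xx_hidden, dim=-1)
    # (batch, batch) similarity matrix between all English / non-English pairs.
    logits = en @ xx.t() / temperature
    # Diagonal entries are the true translation pairs.
    targets = torch.arange(en.size(0), device=en.device)
    # Symmetric cross-entropy: each sentence should match its translation
    # in both retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

In practice, a loss of this kind is usually added to, rather than substituted for, the standard language-modeling objective during a fine-tuning stage, so the model gains cross-lingual consistency without losing generation quality.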