The field of multilingual large language models (LLMs) is advancing rapidly, with a focus on improving language control, translation quality, and reasoning ability. Recent work has explored novel methods for controlling language generation, including sparse feature steering and cross-lingual knowledge transfer; these approaches show promise for mitigating hallucination and improving factual knowledge transfer across languages. There is also growing interest in evaluating cross-lingual alignment and in assessing how language mixing affects bilingual LLM reasoning. Noteworthy papers in this area include CCL-XCoT, which proposes a two-stage fine-tuning framework for mitigating hallucination in multilingual LLMs, and Seed-X, a family of open-source 7B-parameter LLMs that achieves performance comparable to leading closed-source models. Overall, the field is moving toward more efficient, effective, and interpretable models that can handle complex language tasks and generalize well across languages.