The field of multilingual AI and cross-lingual information retrieval is advancing rapidly, with a focus on improving the performance of large language models (LLMs) across many languages. Recent studies highlight the importance of the linguistic composition of training data and the need for effective strategies to mitigate the trade-off between cross-lingual and monolingual performance. New benchmarks and evaluation frameworks, such as the AI Language Proficiency Monitor and Marco-Bench-MIF, are facilitating the assessment of LLMs' capabilities across different languages and tasks. Research on adapting definition modeling to new languages and on multimodal foundation models' ability to understand schematic diagrams is further expanding the scope of multilingual AI. Overall, the field is moving toward more inclusive and transparent AI systems, with a growing emphasis on measuring and closing the performance gaps between high- and low-resource languages.

Noteworthy papers include the AI Language Proficiency Monitor, which provides a comprehensive multilingual benchmark for evaluating LLM performance; the HanjaBridge paper, which introduces a novel technique that yields significant improvements in Korean language understanding; and the MapIQ benchmark, which evaluates multimodal large language models on map question answering tasks.