The field of Natural Language Processing (NLP) is advancing rapidly, with a strong focus on multilingual capabilities and language modeling. Recent research has examined the role of language families and morphology in cross-lingual transfer, finding that language family proximity and morphological similarity can significantly affect model performance. There is also growing interest in evaluating and improving large language models (LLMs) in low-resource languages, where techniques such as synthetic data generation and multitask learning show promise.

New benchmarks and evaluation frameworks are enabling more comprehensive assessments of LLMs and agentic AI systems in multilingual settings. Notable proposals include MUG-Eval, a framework for evaluating LLMs' multilingual generation capabilities, and MAPS, a multilingual benchmark suite for agentic AI systems. Beyond evaluation, research on word order change and language evolution has proposed a universal underlying mechanism based on word class length, and new methods for probing subphonemes in morphology models have deepened our understanding of how transformers encode phonological features.
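To make the probing idea concrete, the sketch below shows the general diagnostic-probing recipe such work builds on: freeze a model's hidden states and train a small linear classifier to predict a phonological feature (here, a hypothetical binary voicing label). High probe accuracy suggests the feature is linearly encoded in the representations. All data, dimensions, and names here are synthetic assumptions for illustration, not the method of any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 32  # number of examples, hidden-state dimension (assumed)

# Synthetic "hidden states": we plant the feature along one direction so
# the probe has something to find. Real probes would use states extracted
# from a frozen morphology model.
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)          # 0 = voiceless, 1 = voiced
hidden = rng.normal(size=(n, d)) + np.outer(labels - 0.5, direction) * 2.0

# Logistic-regression probe trained with plain gradient descent; only the
# probe's weights are updated, never the (here, synthetic) representations.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(hidden @ w + b)))   # predicted probabilities
    w -= 0.5 * (hidden.T @ (p - labels) / n)      # gradient step on weights
    b -= 0.5 * float(np.mean(p - labels))         # gradient step on bias

accuracy = float(np.mean((hidden @ w + b > 0) == labels))
print(f"probe accuracy: {accuracy:.2f}")
```

A probe that scores well above a majority-class baseline is taken as evidence that the feature is recoverable from the representations; keeping the probe linear and small guards against the classifier itself doing the work.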