Natural language processing is seeing rapid progress in multilingual language modeling and speech processing. Researchers are actively developing and evaluating large language models (LLMs) for low-resource languages, underscoring the need for greater investment to close the performance gap with high-resource languages. New benchmarks and datasets are being introduced to evaluate LLMs across a wide range of tasks, including language identification, text classification, question answering, and translation, in both speech and text modalities. There is also growing work on effective and robust speech-to-text (STT) systems for low-resource languages, with notable improvements in word error rate (WER) and BLEU scores. Dialect normalization, which transforms dialectal text into the standard language so that standard-language tools can be applied downstream, is another active direction.

Noteworthy papers include:
- mSTEB: introduces a new benchmark to evaluate LLMs on speech and text tasks for low-resource languages.
- Advancing STT for Low-Resource Real-World Speech: presents a new dataset and fine-tuned models for Swiss German speech-to-text.
- GigaChat Family: introduces a family of Russian LLMs with state-of-the-art performance.
- Towards Open Foundation Language Model and Corpus for Macedonian: creates a large corpus and a state-of-the-art model for the Macedonian language.
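Since progress on STT systems is typically reported as word error rate, a minimal sketch of how WER is computed may be useful: it is the word-level Levenshtein (edit) distance between the hypothesis and the reference transcript, divided by the reference length. The example strings below are illustrative, not from any of the cited datasets.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of three reference words -> WER = 1/3
print(wer("the cat sat", "the cat sit"))
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is usually paired with a second metric such as BLEU for translation-style outputs.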