Natural language processing research is increasingly focused on multilingual reasoning and text evaluation. Researchers are developing methods to improve the performance of large language models in low-resource languages and to evaluate text quality in ways that are both nuanced and scalable. One key direction is the design of training methods and benchmarks that assess reasoning and evaluation capabilities across many languages. Another is the application of techniques such as few-shot learning, transfer learning, and data augmentation to improve the robustness and accuracy of text classification and evaluation models. Noteworthy papers include Learn Globally, Speak Locally: Bridging the Gaps in Multilingual Reasoning, which introduces a benchmark and training method for multilingual factual reasoning, and Checklist Engineering Empowers Multilingual LLM Judges, which proposes a training-free, checklist-based framework for multilingual evaluation with large language models.
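
The checklist-based judging idea can be illustrated with a minimal sketch. This is not the paper's actual framework; the `llm` callable, prompt wording, and checklist items below are all assumptions for illustration. The premise: instead of asking a model for a single holistic score, the judge asks one yes/no question per checklist item and reports the fraction of items satisfied, which tends to be more interpretable and more language-agnostic than free-form scoring.

```python
from typing import Callable

def checklist_judge(
    llm: Callable[[str], str],  # assumption: any text-in/text-out model call
    source_text: str,
    candidate: str,
    checklist: list[str],
) -> float:
    """Score `candidate` against `source_text` by asking the model one
    yes/no question per checklist item; return the fraction satisfied."""
    if not checklist:
        return 0.0
    passed = 0
    for item in checklist:
        prompt = (
            "You are evaluating a piece of text.\n"
            f"Source:\n{source_text}\n\n"
            f"Candidate:\n{candidate}\n\n"
            f"Question: {item}\n"
            "Answer with exactly 'yes' or 'no'."
        )
        answer = llm(prompt).strip().lower()
        passed += answer.startswith("yes")  # bool counts as 0 or 1
    return passed / len(checklist)

# Hypothetical usage with any str -> str model wrapper:
# score = checklist_judge(
#     llm=my_model.generate,  # assumption: placeholder model call
#     source_text="Die Hauptstadt von Frankreich ist Paris.",
#     candidate="The capital of France is Paris.",
#     checklist=[
#         "Is the candidate faithful to the source?",
#         "Is the candidate fluent in the target language?",
#     ],
# )
```

Because the judge is a plain function over a model callable, it requires no training and transfers across languages simply by translating or rewriting the checklist, which is the sense in which such frameworks are described as training-free.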