The field of natural language processing is increasingly leveraging large language models to improve machine translation, particularly for low-resource languages. Recent studies show that fine-tuning neural rankers on closely related language varieties can transfer retrieval effectiveness zero-shot to low-resource varieties, and that synthetic data generation can substantially improve cross-lingual open-ended generation. Large language models have also been applied directly to low-resource machine translation, with encouraging early results. In parallel, there is growing emphasis on incorporating indigenous knowledge into language models and on building technologies that support the revitalization of endangered languages.

Noteworthy papers include: Improving Low-Resource Retrieval Effectiveness using Zero-Shot Linguistic Similarity Transfer, which fine-tunes neural rankers on high-resource language varieties so that retrieval effectiveness carries over to related low-resource ones; XL-Instruct: Synthetic Data for Cross-Lingual Open-Ended Generation, which introduces both a new benchmark and a synthetic data generation method for cross-lingual generation; and VNJPTranslate: A comprehensive pipeline for Vietnamese-Japanese translation, which presents a systematic approach to the challenges of a low-resource language pair.
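To make the first trend concrete, the sketch below fine-tunes a cross-encoder ranker on relevance-labelled pairs in a high-resource variety and then scores query-document pairs in a related low-resource variety zero-shot. This is only an illustration of the general transfer idea, not the paper's actual setup: it assumes a pre-4.0 sentence-transformers API, and the German/Swiss German examples, model choice, and toy labels are all invented for demonstration.

```python
# Minimal sketch of zero-shot linguistic similarity transfer for retrieval.
# Assumes the sentence-transformers library (pre-4.0 fit() API); the data,
# language pair, and model name are illustrative, not from the paper.
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

# Hypothetical relevance-labelled (query, document) pairs in a
# high-resource variety (standard German); real training sets are far larger.
train_samples = [
    InputExample(texts=["Wie spät ist es?", "Es ist drei Uhr."], label=1.0),
    InputExample(texts=["Wie spät ist es?", "Der Himmel ist blau."], label=0.0),
]

# Fine-tune a cross-encoder ranker on the high-resource variety.
ranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", num_labels=1)
loader = DataLoader(train_samples, shuffle=True, batch_size=2)
ranker.fit(train_dataloader=loader, epochs=1)

# Apply the ranker zero-shot to a closely related low-resource variety
# (Swiss German here, purely for illustration): the hypothesis is that
# linguistic similarity lets the fine-tuned ranker transfer.
scores = ranker.predict([
    ("Wie spaat isch es?", "Es isch drüü Uhr."),
    ("Wie spaat isch es?", "De Himmel isch blau."),
])
print(scores)  # a higher score should indicate the relevant passage
```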
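The synthetic-data trend can be sketched in the same spirit: an LLM drafts open-ended instructions in English paired with responses in a target language, yielding cross-lingual training examples. This is a hedged illustration of the general recipe, not XL-Instruct's pipeline; the client, model name, prompt, and seed topics are all assumptions.

```python
# Hedged sketch of synthetic cross-lingual instruction-data generation,
# not the XL-Instruct method itself. Assumes the openai v1 Python client
# with OPENAI_API_KEY set; the model name and prompt are illustrative.
import json
from openai import OpenAI

client = OpenAI()

def generate_pair(seed_topic: str, target_lang: str) -> dict:
    """Ask the model for one synthetic cross-lingual instruction/response pair."""
    prompt = (
        f"Write one open-ended instruction in English about '{seed_topic}', "
        f"then answer it in {target_lang}. "
        'Return JSON: {"instruction": "...", "response": "..."}'
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(completion.choices[0].message.content)

# Build a tiny synthetic set; a real pipeline would add quality
# filtering and deduplication before fine-tuning on the result.
dataset = [generate_pair(t, "Vietnamese") for t in ["cooking", "local history"]]
print(dataset[0]["instruction"], "->", dataset[0]["response"][:80])
```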