The field of machine translation is moving toward handling nuanced language and ambiguity, with particular attention to disambiguation in multi-domain translation and to idiomatic expressions. Researchers are probing the capabilities and limitations of large language models on complex linguistic phenomena such as multiword expressions and entity translation. Evaluation frameworks and metrics are another key focus, aimed at improving the accuracy and cultural adaptability of machine translation systems. Noteworthy papers include Evaluating Large Language Models on Multiword Expressions in Multilingual and Code-Switched Contexts, which highlights the challenges of handling nuanced language; DMDTEval, which presents a systematic evaluation framework for disambiguation in multi-domain translation; and Team ACK at SemEval-2025 Task 2, which offers a comprehensive evaluation of machine translation models for English-Korean pairs and exposes gaps in automatic evaluation metrics.
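The evaluation gap noted for English-Korean pairs can be made concrete with a small sketch: surface-overlap metrics such as chrF reward character overlap, so a translation that mistranslates an entity name can still receive a fairly high score. The Korean sentences below are hypothetical examples, not drawn from the Team ACK paper, and the sketch assumes the sacrebleu package is installed.

```python
# Illustrative only: hypothetical English->Korean outputs showing how a
# surface-overlap metric (chrF) can under-penalize an entity error.
# Requires: pip install sacrebleu
import sacrebleu

reference = "넷플릭스는 오징어 게임 시즌 2를 공개했다."  # "Netflix released Squid Game Season 2."
good_hyp = "넷플릭스가 오징어 게임 시즌 2를 공개했다."   # correct entity, minor particle change
bad_hyp = "넷플릭스가 문어 게임 시즌 2를 공개했다."      # entity mistranslated ("Octopus Game")

for label, hyp in [("correct entity", good_hyp), ("wrong entity", bad_hyp)]:
    chrf = sacrebleu.sentence_chrf(hyp, [reference])
    print(f"{label}: chrF = {chrf.score:.1f}")

# chrF still assigns a substantial score to the second translation even though
# the title is wrong -- the kind of gap entity-aware evaluation aims to expose.
```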