The field of medical language understanding and translation is moving toward integrating domain-specific structured knowledge into large language models to improve performance. This trend is evident in hybrid frameworks that combine knowledge graphs and reinforcement learning to generate scientific explanations and improve predictive accuracy. The use of multilingual variants, medical synonyms, and domain-specific ontologies is also becoming increasingly important for improving medical translation quality. Noteworthy papers include MedCOD, which significantly improves English-to-Spanish medical translation quality; REx, which generates scientific explanations based on link prediction in knowledge graphs; and Code Like Humans, a multi-agent solution for medical coding that supports the full ICD-10 coding system.
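
To make the link-prediction task concrete, the sketch below shows a generic TransE-style scorer that ranks candidate tail entities for an incomplete knowledge-graph triple. This is an illustrative toy, not REx's actual model: the entity names, relation, and random embeddings are hypothetical, and a real system would learn embeddings from observed triples.

```python
# Minimal sketch of knowledge-graph link prediction (TransE-style scoring).
# Assumptions: toy entities/relations and random embeddings stand in for a
# trained medical knowledge graph; this is not the method of any cited paper.
import numpy as np

rng = np.random.default_rng(0)
entities = ["aspirin", "ibuprofen", "headache", "fever"]  # hypothetical
relations = ["treats"]                                    # hypothetical
dim = 8

# Toy embeddings; in practice these are learned from known triples.
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

def score(head: str, rel: str, tail: str) -> float:
    """TransE plausibility score: -||h + r - t||; higher is more plausible."""
    return -float(np.linalg.norm(E[head] + R[rel] - E[tail]))

def rank_tails(head: str, rel: str) -> list[tuple[str, float]]:
    """Rank candidate tails for the incomplete triple (head, rel, ?)."""
    cands = [(t, score(head, rel, t)) for t in entities if t != head]
    return sorted(cands, key=lambda p: p[1], reverse=True)

print(rank_tails("aspirin", "treats"))
```

An explanation-generating system like REx would then justify the top-ranked completions, e.g. by citing the graph paths that support them.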