The field of speech and language technologies is moving toward more inclusive and personalized solutions. Researchers are developing tools to support non-native English speakers in STEM education, such as real-time lexical cues and interactive rhythm training systems. There is also a growing focus on multilingual vision-language models, with advances in retrieval-augmented generation and concept-aware captioning. Speech-driven 3D facial animation is becoming increasingly sophisticated, with new methods for personalized animation and phonetic context-dependent viseme modeling. In parallel, technologies are emerging to assist low-vision learners, including personalized visual guidance tools.

Noteworthy papers include CONCAP, a multilingual image captioning model that conditions generation on retrieved captions together with image-specific concepts; MemoryTalker, which synthesizes realistic and accurate 3D facial motion while capturing speaking style from audio input alone; and VeasyGuide, which gives low-vision learners personalized visual guidance on instructor actions in presentation videos.
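To make the retrieval-augmented, concept-aware captioning idea concrete, the sketch below shows the general prompting pattern that CONCAP's description suggests: retrieve captions similar to the input, pair them with detected image concepts, and condition a multilingual captioner on both. This is a minimal illustration under stated assumptions; the caption store, concept list, and all function names are hypothetical, not CONCAP's actual components.

```python
# Minimal sketch of retrieval-augmented, concept-aware caption prompting.
# All data and helper names here are illustrative assumptions, not CONCAP's
# actual components.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Bag-of-words cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_captions(query: str, store: list[str], k: int = 2) -> list[str]:
    """Return the k stored captions most similar to the query description."""
    q = Counter(query.lower().split())
    ranked = sorted(store, key=lambda c: cosine(q, Counter(c.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(concepts: list[str], retrieved: list[str], lang: str) -> str:
    """Condition the captioner on detected concepts plus retrieved captions."""
    lines = [f"Target language: {lang}",
             "Image concepts: " + ", ".join(concepts),
             "Related captions:"]
    lines += [f"- {c}" for c in retrieved]
    lines.append("Caption:")
    return "\n".join(lines)

# Toy caption store standing in for a real retrieval index.
store = [
    "a dog runs across a grassy field",
    "two children play soccer in a park",
    "a cat sleeps on a windowsill",
]
concepts = ["dog", "field", "running"]  # e.g. output of a concept detector
retrieved = retrieve_captions("dog running in a field", store)
print(build_prompt(concepts, retrieved, lang="es"))
```

In a real system, the bag-of-words retriever would be replaced by a learned multimodal encoder and the prompt would feed a trained captioning model; the sketch only shows how the two conditioning signals, retrieved captions and image-specific concepts, are combined.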