The field of vision-language models is advancing rapidly, with growing emphasis on practical, efficient models for assistive technologies. Researchers are exploring new architectures and techniques to improve performance on tasks such as walking assistance, action analysis, and multimodal reasoning. In particular, models that reduce output and temporal redundancy are gaining attention, and the coupling of vision-language models with action decoding is being investigated to enable more seamless human-computer interaction. Noteworthy papers in this area include Less Redundancy, which proposes a walking assistance model that cuts redundant output; NinA, which presents a fast and expressive alternative to diffusion-based decoders for vision-language-action models; HieroAction, which introduces a vision-language model that delivers accurate and structured assessments of human actions; and InternVL3.5, which advances open-source multimodal models in versatility, reasoning, and efficiency.