The field of vision-language models is moving toward deeper integration of multimodal techniques, enabling more accurate and efficient processing of complex data. This is evident in frameworks that combine textual and visual inputs to generate high-quality outputs such as patent specifications and clinical reports. Multimodal architectures are also being explored in applications including license plate recognition, document parsing, and reasoning in latent space. Noteworthy papers in this area include PatentVision, which improves accuracy by pairing fine-tuned vision-language models with domain-specific training tailored to patents; PaddleOCR-VL, a resource-efficient model that achieves state-of-the-art performance in multilingual document parsing; and NEO, a novel family of native VLMs that rival top-tier modular counterparts across diverse real-world scenarios.
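To make the text-plus-image pattern concrete, the sketch below runs a single multimodal inference pass using the Hugging Face `transformers` API. The checkpoint, image path, and prompt are illustrative stand-ins; none of the cited papers (PatentVision, PaddleOCR-VL, NEO) publish this exact interface, so treat this as a minimal sketch of how a typical vision-language model fuses the two input streams, not as any one paper's method.

```python
# Minimal sketch of a text+image inference pass with a generic
# vision-language model. The checkpoint is an illustrative public
# model, not one of the papers cited above.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

checkpoint = "Salesforce/blip2-opt-2.7b"  # placeholder choice of VLM
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForVision2Seq.from_pretrained(checkpoint)

image = Image.open("patent_figure.png")  # visual input (hypothetical file)
prompt = "Describe this drawing for a patent specification."  # textual input

# The processor fuses both modalities into a single batch of tensors.
inputs = processor(images=image, text=prompt, return_tensors="pt")

# Generation attends jointly over image features and prompt tokens.
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

The same call pattern underlies the applications mentioned above: only the image source and prompt change between, say, parsing a scanned document and captioning a license plate crop.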