The field of multimodal reasoning and vision-language models is advancing rapidly, with a focus on strengthening how models reason over and integrate visual and linguistic information. Recent work introduces frameworks and methods that fuse visual and linguistic knowledge more effectively, improving performance on tasks such as visual question answering and medical image analysis. In particular, iterative reasoning processes and multimodal retrieval-augmented generation have shown promise for improving the accuracy and trustworthiness of model outputs; a hedged sketch of such a loop appears below. New benchmarks and evaluation metrics have also highlighted the need for more nuanced and comprehensive assessments of model performance, especially in complex domains such as medical imaging and art analysis. Overall, the field is moving toward more sophisticated, human-like models that reason and communicate effectively across multiple modalities. Noteworthy papers include VisRAG 2.0, which proposes an end-to-end framework for evidence-guided multi-image reasoning, and Think Twice to See More, which introduces an iterative visual reasoning framework for medical vision-language models.
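To make the iterative, retrieval-augmented pattern mentioned above concrete, the following is a minimal sketch of a retrieve-then-reason loop. It is an illustration under assumed interfaces: the functions `retrieve_evidence`, `vlm_generate`, and the `Evidence` type are hypothetical stand-ins, not the APIs of VisRAG 2.0, Think Twice to See More, or any specific system.

```python
# Illustrative sketch of an iterative, retrieval-augmented multimodal QA loop.
# All names below are hypothetical placeholders, not any paper's actual API.
from dataclasses import dataclass


@dataclass
class Evidence:
    image_id: str  # identifier of a retrieved image
    caption: str   # textual description used as grounding context


def retrieve_evidence(query: str, k: int = 3) -> list[Evidence]:
    """Hypothetical multimodal retriever: return the top-k images relevant to the query."""
    # A real system would query a vision-language index (e.g. CLIP-style embeddings).
    return [Evidence(image_id=f"img_{i}", caption=f"placeholder caption {i}") for i in range(k)]


def vlm_generate(question: str, evidence: list[Evidence], draft: str | None) -> tuple[str, float]:
    """Hypothetical VLM call: produce an answer plus a self-assessed confidence score."""
    context = "; ".join(e.caption for e in evidence)
    answer = f"answer grounded in [{context}]"
    confidence = 0.5 + 0.2 * (draft is not None)  # toy score; a real model would estimate this
    return answer, confidence


def iterative_multimodal_qa(question: str, max_rounds: int = 3, threshold: float = 0.8) -> str:
    """Retrieve evidence, draft an answer, and re-query until the model is confident."""
    draft, query = None, question
    for _ in range(max_rounds):
        evidence = retrieve_evidence(query)
        draft, confidence = vlm_generate(question, evidence, draft)
        if confidence >= threshold:
            break
        # Refine the retrieval query with the current draft before reasoning again.
        query = f"{question} | previous draft: {draft}"
    return draft


if __name__ == "__main__":
    print(iterative_multimodal_qa("What abnormality is visible in the chest X-ray?"))
```

The design choice this sketch highlights is the feedback step: each round's draft answer is folded back into the retrieval query, so later rounds can fetch evidence the first pass missed, which is the general intuition behind the iterative reasoning and evidence-guided approaches surveyed above.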