The field of multimodal reasoning and vision-language models is advancing rapidly, with a focus on improving model performance and addressing cultural bias. Recent work has highlighted the importance of carefully curated, high-quality data in achieving state-of-the-art results. Notably, innovations in self-supervised fine-tuning and data-centric approaches have shown promise in rivaling proprietary models. Furthermore, culturally grounded datasets and function-centric frameworks have helped reduce socioeconomic performance gaps and improve model generalizability. Overall, the field is moving toward more inclusive and equitable AI systems. Noteworthy papers include: Closing the Gap, which demonstrates the effectiveness of supervised fine-tuning with high-quality data; DEJIMA, which introduces a large-scale Japanese dataset for image captioning and visual question answering; and Culture Affordance Atlas, which proposes a function-centric framework for categorizing objects across diverse cultural contexts.
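To make the data-centric idea concrete, the sketch below shows a generic curation pass that keeps only high-quality caption examples before supervised fine-tuning. This is a minimal illustration of the general approach, not the pipeline of any paper summarized above; the `quality` field, the `curate` helper, and both thresholds are hypothetical assumptions.

```python
from dataclasses import dataclass


@dataclass
class CaptionExample:
    image_path: str
    caption: str
    quality: float  # hypothetical curation score in [0, 1], e.g. from a rater model


def curate(examples, min_quality=0.8, min_caption_words=5):
    """Keep only high-quality examples for supervised fine-tuning.

    Both thresholds are illustrative assumptions, not values taken
    from any of the papers summarized above.
    """
    kept = []
    for ex in examples:
        if ex.quality < min_quality:
            continue  # drop examples with low-scoring captions
        if len(ex.caption.split()) < min_caption_words:
            continue  # drop captions too short to teach visual grounding
        kept.append(ex)
    return kept


if __name__ == "__main__":
    raw = [
        CaptionExample("img/001.jpg", "A street vendor grills skewers at a night market.", 0.93),
        CaptionExample("img/002.jpg", "photo", 0.41),
        CaptionExample("img/003.jpg", "Two children fly kites over terraced rice fields.", 0.88),
    ]
    curated = curate(raw)
    print(f"kept {len(curated)} of {len(raw)} examples for SFT")
```

The design choice reflected here is that aggressive filtering trades dataset size for label quality, which is the trade-off the data-centric results above suggest can pay off.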