The field of multimodal data analysis and visualization is evolving rapidly, with a focus on improving the ability of models to understand and interpret complex visual data. Recent developments center on strengthening vision-language models (VLMs) for tasks such as chart understanding, visual reasoning, and causal inference. Researchers are exploring approaches including large language models, reinforcement learning, and data synthesis to advance the state of the art in these areas. Noteworthy papers in this area include:

- Automated Visualization Makeovers with LLMs: introduces a system that semi-automatically generates constructive criticism to improve data visualizations.
- InfoCausalQA: a benchmark designed to evaluate causal reasoning grounded in infographics, highlighting the need to advance the causal reasoning abilities of multimodal AI systems.
- Effective Training Data Synthesis for Improving MLLM Chart Understanding: presents a data synthesis pipeline that improves the chart understanding capabilities of multimodal large language models.
- InterChart: a diagnostic benchmark that evaluates how well VLMs reason across multiple related charts, exposing systematic limitations in current models.
- From Charts to Fair Narratives: investigates geo-economic biases in VLM-generated chart summaries and explores inference-time, prompt-based debiasing techniques.
- VisFinEval: a large-scale Chinese benchmark that comprehensively evaluates the ability of MLLMs to automate complex financial analysis.
- The Perils of Chart Deception: demonstrates the susceptibility of VLMs to deceptive visual designs and highlights the need for robust safeguards against visual misinformation.
- BigCharts-R1: proposes a dataset creation pipeline and a training framework that integrates supervised fine-tuning with reinforcement learning to improve robustness and generalization in chart reasoning.