The field of vision-language models and data visualization is evolving rapidly, with a focus on developing more sophisticated, human-like AI systems. Recent research has highlighted the limitations of current vision-language models on tasks such as visual measurement reading and open-ended association reasoning. To address these gaps, new benchmarks and datasets have been introduced: MeasureBench evaluates visual measurement reading and exposes the shortcomings of current vision-language models on that task, while MM-OPERA offers a systematic benchmark for open-ended association reasoning in large vision-language models. Alongside these evaluation efforts, there is a growing emphasis on more effective data visualization techniques, including iterative dashboard refinement and code generation models such as PlotCraftor.
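To make the idea of iterative dashboard refinement concrete, the sketch below shows a generic generate-render-critique loop for plotting code. It is a minimal illustration under stated assumptions, not a description of PlotCraftor or any cited system: the `model.generate_code` and `model.critique` interface, the `refine_dashboard` helper, and the output filename are all hypothetical.

```python
# Minimal sketch of an iterative refinement loop for generated plotting code.
# The model interface (generate_code, critique) is a hypothetical assumption;
# any code-generation backend could be substituted.

import matplotlib
matplotlib.use("Agg")  # render off-screen so the loop runs headlessly
import matplotlib.pyplot as plt


def render(code: str) -> tuple[bool, str]:
    """Execute generated plotting code; return (success, error message)."""
    try:
        exec(code, {"plt": plt})
        plt.savefig("dashboard.png")
        plt.close("all")
        return True, ""
    except Exception as exc:
        plt.close("all")
        return False, str(exc)


def refine_dashboard(model, spec: str, max_rounds: int = 3) -> str:
    """Generate plotting code for `spec`, re-prompting with feedback each round."""
    feedback, code = "", ""
    for _ in range(max_rounds):
        code = model.generate_code(spec, feedback)        # hypothetical API
        ok, error = render(code)
        if not ok:
            feedback = f"The code failed with: {error}. Please fix it."
            continue
        feedback = model.critique("dashboard.png", spec)  # hypothetical API
        if not feedback:  # an empty critique means the current version is accepted
            break
    return code
```

The design choice illustrated here is simply that each round feeds execution errors or a visual critique back into the next generation step, which is the general pattern behind iterative refinement of generated visualizations.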