Research at the intersection of data visualization and vision-language models is evolving rapidly, with growing emphasis on human-centered evaluation. Recent work argues that data visualization understanding in artificial systems should be measured with instruments comparable to those used to assess human visualization literacy, and such evaluations have exposed clear limitations of current vision-language models. Noteworthy papers include:
- CHART-6, which evaluated eight vision-language models on six data visualization literacy assessments and found that, on average, the models performed worse than human participants (a hedged sketch of this kind of item-level comparison appears after this list).
- Qwen Look Again, which introduced a vision-language reasoning model designed to mitigate hallucinations through a vision-text reflection process that guides the model to re-attend to visual information during reasoning (see the second sketch after this list).
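
The first sketch below illustrates, under stated assumptions, what a CHART-6-style comparison might look like: a model is scored on visualization-literacy items and its accuracy is set against a human baseline. The item data, the `ask_model()` stub, and the `human_accuracy` figures are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of comparing a vision-language model's accuracy on
# visualization-literacy items against a human accuracy baseline.
from statistics import mean

# Each item: a question about a chart, the keyed answer, and the (assumed)
# fraction of human participants who answered it correctly.
items = [
    {"question": "Which month had the highest sales?", "answer": "july", "human_accuracy": 0.92},
    {"question": "Is the overall trend increasing or decreasing?", "answer": "increasing", "human_accuracy": 0.85},
    {"question": "What is the approximate value at x = 3?", "answer": "40", "human_accuracy": 0.64},
]

def ask_model(question: str) -> str:
    """Stand-in for a vision-language model call; a real harness would also pass the chart image."""
    return "july" if "month" in question else "decreasing"

# Score each item: exact-match against the keyed answer (a real study would use
# the assessment's own scoring rules).
model_correct = [ask_model(it["question"]).strip().lower() == it["answer"] for it in items]

model_score = mean(1.0 if c else 0.0 for c in model_correct)
human_score = mean(it["human_accuracy"] for it in items)

print(f"Model accuracy: {model_score:.2f}  vs  human baseline: {human_score:.2f}")
```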
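
The second sketch shows one way a vision-text reflection step could work in the spirit described above: the model drafts an answer, then is prompted again with the original image plus its own draft so that it re-attends to the visual evidence before finalizing. The `generate()` function is a placeholder for any vision-language model API, not the paper's actual implementation.

```python
# Minimal two-pass reflection loop: draft, then re-check the draft against the image.

def generate(image, prompt: str) -> str:
    """Placeholder for a VLM call that conditions on an image and a text prompt."""
    return f"(model output for prompt starting: {prompt[:40]}...)"

def answer_with_reflection(image, question: str) -> str:
    # Pass 1: ordinary visual question answering with reasoning.
    draft = generate(image, f"Question: {question}\nAnswer with your reasoning.")

    # Pass 2: reflection. Re-present the image together with the draft and ask the
    # model to verify each visual claim before giving a final answer.
    reflection_prompt = (
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        "Look at the image again. Check every visual claim in the draft against "
        "what is actually shown, correct any hallucinated details, then give a final answer."
    )
    return generate(image, reflection_prompt)

final = answer_with_reflection(image=None, question="Which bar is tallest?")
print(final)
```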