The field of data visualization is evolving rapidly, with a focus on new methods for generating and interacting with visualizations. Recent research has explored large language models (LLMs) for generating visualizations, editing charts, and building interactive systems for exploring and customizing particle effects. Notable advances include benchmarks for evaluating text-to-visualization models and datasets for training and testing LLMs on a range of visualization tasks. Researchers have also investigated tactile charts to support comprehension and learning of complex visualizations by blind and low-vision individuals, and have proposed structured grammars for chart annotation. Overall, the field is moving toward more automated, interactive, and accessible visualization systems.
Noteworthy papers include ChartGen, which presents a fully automated pipeline for code-guided synthetic chart generation; Text2Vis, which introduces a benchmark for assessing text-to-visualization models and proposes a cross-modal actor-critic agentic framework that refines both textual answers and visualization code; VizGenie, which advances scientific visualization through a self-improving agentic framework orchestrating domain-specific and dynamically generated modules; InSituTale, which lets presenters manipulate visualizations through physical object interactions to enhance augmented data storytelling; and ChatVis, which helps LLMs generate Python code for scientific visualization tasks without retraining or fine-tuning.
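To make the idea of code-guided synthetic chart generation concrete, the sketch below shows one plausible shape of such a pipeline: a random chart specification is sampled, then rendered into a runnable matplotlib code string, so each synthetic chart is paired with the exact code that produces it. This is an illustrative sketch under assumed names (`make_chart_spec`, `spec_to_code`, and the spec schema are hypothetical), not ChartGen's actual implementation.

```python
import json
import random

def make_chart_spec(seed=0):
    """Sample a random chart specification.
    The schema here (type/x/y) is a hypothetical example,
    not ChartGen's actual format."""
    rng = random.Random(seed)
    chart_type = rng.choice(["bar", "line", "scatter"])
    categories = [f"c{i}" for i in range(rng.randint(3, 6))]
    values = [round(rng.uniform(0, 100), 1) for _ in categories]
    return {"type": chart_type, "x": categories, "y": values}

def spec_to_code(spec):
    """Render the spec as a runnable matplotlib code string.
    The code string is the 'code-guided' half of the pipeline:
    it serves as ground truth paired with the rendered chart."""
    fn = {"bar": "bar", "line": "plot", "scatter": "scatter"}[spec["type"]]
    return (
        "import matplotlib.pyplot as plt\n"
        f"x = {spec['x']!r}\n"
        f"y = {spec['y']!r}\n"
        f"plt.{fn}(x, y)\n"
        "plt.savefig('chart.png')\n"
    )

if __name__ == "__main__":
    spec = make_chart_spec(seed=42)
    print(json.dumps(spec))
    print(spec_to_code(spec))
```

Running the emitted code string (e.g. via a subprocess) would produce the chart image, yielding (image, code, data) triples suitable for training or evaluating text-to-visualization models.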