The field of multimodal chart understanding and generation is advancing rapidly, driven by models that can accurately comprehend and generate charts across diverse domains. Recent work highlights the value of customized pre-training for chart-data alignment, dual-path training strategies that couple chart comprehension with underlying-data understanding, and large language models that transform research papers into visual explanations. Noteworthy papers include ChartScope, which introduces a Dual-Path training strategy for in-depth chart comprehension; Manimator, which leverages large language models to turn research papers into explanatory animations; and Chart-R1, a chart-domain vision-language model fine-tuned with reinforcement learning for advanced chart reasoning. Together, these advances point toward more capable chart understanding and generation systems, with applications in education, research, and industry.
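To make the dual-path idea concrete, the sketch below shows one training step that combines a chart question-answering loss with an underlying-data reconstruction loss, so the model must align visual marks with the raw values behind them. This is a minimal illustration under assumed interfaces: the `model(images=..., input_ids=...)` call signature, the batch field names, and the `data_loss_weight` parameter are hypothetical placeholders, not ChartScope's actual API.

```python
import torch
import torch.nn as nn

def dual_path_step(model, batch, optimizer, data_loss_weight=0.5):
    """One optimization step combining two supervision paths
    (hypothetical interface, in the spirit of dual-path training)."""
    # Path 1: answer questions grounded in the chart image.
    qa_logits = model(images=batch["chart_images"],
                      input_ids=batch["question_ids"])        # (B, T, V)
    qa_loss = nn.functional.cross_entropy(
        qa_logits.flatten(0, 1), batch["answer_ids"].flatten())

    # Path 2: reconstruct the chart's underlying data table as text,
    # forcing chart-data alignment rather than surface pattern matching.
    data_logits = model(images=batch["chart_images"],
                        input_ids=batch["data_prompt_ids"])   # (B, T, V)
    data_loss = nn.functional.cross_entropy(
        data_logits.flatten(0, 1), batch["data_table_ids"].flatten())

    # Weighted sum of the two paths; the weight is an assumed knob,
    # not a value reported by any of the papers above.
    loss = qa_loss + data_loss_weight * data_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design point is that a single backbone receives gradients from both objectives each step, rather than being trained on chart QA alone; how the two losses are balanced, scheduled, or staged is a detail the papers themselves define.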