The field of multimodal large language models (MLLMs) is moving toward evaluating and improving mental visualization capabilities. Recent studies have highlighted the importance of assessing the robustness of text-to-image models and their ability to generate images that conform to specified factors of variation in input text prompts. New benchmarks and evaluation frameworks are being developed to address the limitations of current models in recognizing visual patterns and performing spatial reasoning. Notably, research is also exploring the integration of mental imagery into machine thinking frameworks to enhance the reasoning capabilities of AI systems.

Noteworthy papers in this area include:
- Hyperphantasia: a synthetic benchmark designed to evaluate the mental visualization abilities of MLLMs, which reveals a substantial gap between human and MLLM performance.
- Beyond vividness: a study that applies natural language processing tools to free-text descriptions of hallucinations from over 4,000 participants, providing insights into individual differences in visual imagery.