The field of multimodal large language models (MLLMs) is moving toward more advanced benchmarks for assessing reasoning capabilities. Current efforts focus on evaluating how well MLLMs reason about complex, real-world scenarios, including multimodal chain-of-thought and visual reasoning. These benchmarks test the models' ability to process and generate content across modalities such as text and images, and to handle tasks that require intermediate thinking traces. Developing such benchmarks is crucial both for improving MLLM performance and for understanding the models' limitations.

Notable papers in this area include the Human-Aligned Bench, which proposes a fine-grained assessment of reasoning ability in MLLMs; the MMMR benchmark, which evaluates multi-modal reasoning with explicit thinking; the RBench-V benchmark, which assesses models' vision-indispensable reasoning abilities; and the LENS benchmark, which evaluates MLLMs' handling of image-invariable prompts. The FRANK model is another significant contribution, enabling training-free reasoning and reflection in MLLMs.