The field of multimodal reasoning and verification is advancing rapidly, with a focus on improving the reliability and accuracy of large language models (LLMs) and vision-language models (VLMs) in real-world applications. Researchers are exploring new approaches that inject skepticism into models, verify the authenticity of visual inputs, and strengthen multimodal reasoning and face anti-spoofing. Notable developments include reinforcement learning with verifiable rewards, perceptual-evidence anchored reinforced learning, and pessimistic verification methods, all of which have significant implications for the trustworthiness and performance of LLMs and VLMs. Noteworthy papers include: Cognitive Inception, which proposes a fully reasoning-based agentic framework for generalizable authenticity verification; PA-FAS, which enhances reasoning paths by constructing high-quality extended reasoning sequences from limited annotations, improving multimodal reasoning accuracy and cross-domain generalization; and HERMES, which introduces a tool-assisted agent that explicitly interleaves informal reasoning with formally verified proof steps, improving reasoning accuracy while reducing computational cost.
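To make two of the recurring ideas concrete, the sketch below illustrates, in generic form, what a verifiable reward and pessimistic verification might look like: the reward comes from an automatically checkable criterion rather than a learned judge, and a candidate answer is scored by its weakest verifier. This is a minimal illustration under assumed interfaces, not the implementation used in any of the cited papers; the function names and the exact-match check are hypothetical.

```python
from typing import Callable, Sequence


def verifiable_reward(answer: str, reference: str) -> float:
    """Reward from an automatically checkable criterion (here, exact match
    against a ground-truth string); no learned reward model is involved."""
    return 1.0 if answer.strip() == reference.strip() else 0.0


def pessimistic_score(answer: str, verifiers: Sequence[Callable[[str], float]]) -> float:
    """Pessimistic verification: accept an answer only as strongly as its
    weakest verifier, so a single failed check vetoes the candidate."""
    return min(v(answer) for v in verifiers)


# Hypothetical usage: rank sampled candidates pessimistically, then use the
# verifiable reward on the selected candidate as an RL training signal.
candidates = ["42", "41"]
checks = [
    lambda a: verifiable_reward(a, "42"),      # ground-truth check
    lambda a: 1.0 if a.isdigit() else 0.0,     # format check
]
best = max(candidates, key=lambda a: pessimistic_score(a, checks))
print(best, verifiable_reward(best, "42"))
```

The design choice of taking the minimum over verifiers (rather than an average) reflects the pessimistic stance: an answer that fails any single check is treated as unverified.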