The field of multimodal large language models (MLLMs) is rapidly advancing, with a focus on integrating external tools and multimodal perception to improve reasoning capabilities. Recent developments have introduced agentic frameworks that unify global planning with local multimodal perception, enabling MLLMs to flexibly and efficiently use external tools during reasoning. These frameworks have demonstrated strong generalization and improved performance across various tasks, including visual compliance verification and real-time assistance in augmented reality. Notably, incorporating multimodal context and human-AI co-embodied intelligence has shown significant potential for adaptive, intuitive assistance in industrial training and scientific experimentation. Noteworthy papers include ToolScope, which introduces an agentic framework for vision-guided and long-horizon tool use and achieves an average performance improvement of up to +6.69% across all datasets, and CompAgent, which proposes an agentic framework for visual compliance verification, achieving an F1 score of up to 76% and a 10% improvement over the state of the art on the UnsafeBench dataset.
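To make the planner-plus-perception pattern concrete, the sketch below shows a minimal agentic tool-use loop: a global planning step proposes tool-grounded sub-steps, each tool call yields a local multimodal observation, and the accumulated evidence feeds the final answer. This is illustrative only; the class names, the tool registry, and the `global_plan`/`local_perceive` helpers are assumptions for exposition, not the actual ToolScope or CompAgent interfaces.

```python
"""Minimal sketch of an agentic loop pairing global planning with local
multimodal perception. All names here are illustrative assumptions, not
the real ToolScope or CompAgent APIs."""

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Step:
    tool: str    # name of the external tool to invoke
    query: str   # sub-question or instruction for that tool


@dataclass
class AgentState:
    question: str
    observations: list[str] = field(default_factory=list)


def global_plan(state: AgentState) -> list[Step]:
    """Stand-in for the MLLM's global planner: decompose the question
    into tool-grounded sub-steps (in practice, a model call)."""
    return [Step(tool="crop_and_describe", query=state.question),
            Step(tool="web_search", query=state.question)]


def local_perceive(step: Step, tools: dict[str, Callable[[str], str]]) -> str:
    """Local multimodal perception: run one external tool and return its
    textual observation for the agent to condition on."""
    return tools[step.tool](step.query)


def answer(state: AgentState) -> str:
    """Stand-in for the final reasoning step over accumulated evidence."""
    return (f"Answer to '{state.question}' based on "
            f"{len(state.observations)} tool observations.")


def run_agent(question: str, tools: dict[str, Callable[[str], str]]) -> str:
    state = AgentState(question=question)
    for step in global_plan(state):                # global planning
        obs = local_perceive(step, tools)          # local perception via a tool
        state.observations.append(obs)             # feed evidence back to the agent
    return answer(state)


if __name__ == "__main__":
    # Toy tools standing in for real vision and search backends.
    toy_tools = {
        "crop_and_describe": lambda q: f"[vision] region description for: {q}",
        "web_search": lambda q: f"[search] top result for: {q}",
    }
    print(run_agent("Is the sign in the image compliant with the policy?", toy_tools))
```

In a real system the planner and the final answering step would be model calls, and the tool registry would wrap actual perception or retrieval backends; the loop structure is what the surveyed frameworks share.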