The field of artificial intelligence is seeing significant advances in multimodal large language models (MLLMs) and tool-augmented AI systems. Recent research focuses on enhancing MLLMs by integrating external tools, such as APIs, expert models, and knowledge bases, to improve performance on complex tasks. This approach has shown promise in addressing two persistent problems: weak performance on downstream tasks and inadequate evaluation protocols. External tools have enabled MLLMs to acquire and annotate high-quality multimodal data, tackle challenging tasks more reliably, and support comprehensive, accurate evaluation. Noteworthy papers in this area include Empowering Multimodal LLMs with External Tools, a comprehensive survey on leveraging external tools to enhance MLLM performance, and MCP-Universe, a benchmark that evaluates LLMs on realistic, hard tasks through interaction with real-world Model Context Protocol servers. Papers such as LiveMCP-101 and Dissecting Tool-Integrated Reasoning further highlight the importance of tool-integrated reasoning and the need for more rigorous evaluation of AI agents in real-world scenarios.
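To make the tool-integration pattern concrete, the sketch below shows a minimal agent loop in which a model alternates between emitting tool calls and producing a final answer. All names here (`run_agent`, `fake_model`, the `CALL`/`FINAL` protocol) are hypothetical illustrations, not the API of any paper or of real Model Context Protocol clients, which use far richer schemas; the "model" is a hard-coded stub standing in for an LLM.

```python
# Minimal, hypothetical sketch of a tool-augmented reasoning loop.
# Real systems (e.g. MCP clients) negotiate tools via structured protocols;
# this stub only illustrates the call -> observe -> continue cycle.

def calculator(expression: str) -> str:
    """Toy external tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM: first emits a tool call, then a final answer."""
    if "TOOL_RESULT" not in prompt:
        return "CALL calculator 6*7"
    return "FINAL The answer is " + prompt.rsplit("TOOL_RESULT ", 1)[1]

def run_agent(question: str, max_steps: int = 3) -> str:
    prompt = question
    for _ in range(max_steps):
        reply = fake_model(prompt)
        if reply.startswith("FINAL"):
            return reply[len("FINAL "):]
        _, tool_name, arg = reply.split(" ", 2)
        result = TOOLS[tool_name](arg)       # dispatch to the external tool
        prompt += f"\nTOOL_RESULT {result}"  # feed the observation back
    return "no answer"

print(run_agent("What is 6*7?"))  # -> The answer is 42
```

The design point the benchmarks above probe is exactly this loop: whether a model chooses the right tool, formats the call correctly, and incorporates the returned observation into its next step.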