The field of artificial intelligence is seeing rapid progress in agentic tool use with large language models. Researchers are working to make large language models use external tools effectively, a capability critical for their transition from text generators to reasoning agents. Expansive, real-world benchmarks and novel reinforcement learning frameworks are addressing the limitations of existing evaluation methods and enabling the development of more capable models. Notably, integrating user belief modeling and multi-turn interaction is improving the performance of dialogue systems and tool-using agents. Contextual multi-agent learning frameworks are further improving generalization and sample efficiency in complex, real-world scenarios.

Noteworthy papers include MCPVerse, which introduces an expansive, real-world benchmark for evaluating agentic tool use; MUA-RL, which proposes a reinforcement learning framework for multi-turn user-interacting agents; and Dream to Chat, which applies model-based reinforcement learning to dialogues with user belief modeling.