The field of agentic large language models is evolving rapidly, with growing focus on models that can interact with their environment, use external tools, and reason about complex tasks. Recent research spans domains including mathematical problem-solving, natural language interfaces, and goal-oriented tasks, with some models achieving state-of-the-art results on challenging benchmarks. Frameworks such as GOAT and A^2FM have enabled more robust and efficient agentic models, and related work has investigated natural language tools, taxonomy-based solutions, and entropy-balanced policy optimization as further means of improving agentic performance. Overall, the field is moving toward more capable and generalizable agentic models that can be applied across a wide range of tasks and domains.

Noteworthy papers include A^2FM, which presents a unified framework for tool-aware hybrid reasoning; GOAT, which enables fine-tuning of LLM agents without human annotation; and Demystifying Reinforcement Learning in Agentic Reasoning, a comprehensive investigation of the key design principles and best practices for agentic RL.