The field of artificial intelligence is seeing rapid progress in large language models (LLMs) and agentic systems. Recent research focuses on enabling LLMs to interact with their environment, reason, and make decisions autonomously. Integrating tools and external knowledge has emerged as a key research direction, with studies showing that in-tool learning (retrieving facts through external tools at inference time) can outperform in-weight learning (storing facts in model parameters) for factual recall. In parallel, new agentic frameworks and architectures support the creation of more sophisticated, adaptive systems. Noteworthy papers in this area include 'IR-Agent' and 'rStar2-Agent', which demonstrate the potential of LLM agents in molecular structure elucidation and mathematical reasoning, respectively. Overall, the field is moving toward more capable and autonomous systems, with significant implications for many applications and industries.
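The in-tool versus in-weight distinction can be made concrete with a minimal sketch: instead of relying on facts memorized in its parameters, an agent routes a factual query to an external tool at inference time. All names below (the agent class, the toy knowledge base) are illustrative assumptions, not the API of any paper mentioned above.

```python
from typing import Callable, Dict

class ToolAgent:
    """Routes factual queries to registered tools instead of answering from memory."""

    def __init__(self) -> None:
        self.tools: Dict[str, Callable[[str], str]] = {}

    def register_tool(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def answer(self, query: str) -> str:
        # A real agentic system would let the LLM decide which tool to invoke
        # and when; here we always consult the lookup tool if one exists.
        if "lookup" in self.tools:
            return self.tools["lookup"](query)
        return "unknown"  # fallback: "in-weight" knowledge only

# Toy external knowledge base standing in for a retrieval tool.
KNOWLEDGE = {"boiling point of water": "100 °C at 1 atm"}

agent = ToolAgent()
agent.register_tool("lookup", lambda q: KNOWLEDGE.get(q.lower(), "not found"))

print(agent.answer("Boiling point of water"))  # -> 100 °C at 1 atm
```

Because the fact lives in the tool's knowledge base rather than the model's weights, it can be updated or corrected without retraining, which is one reason factual recall benefits from the in-tool approach.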