The field of large language models (LLMs) is moving toward more efficient and effective reasoning and retrieval. Recent work focuses on improving how accurately and efficiently LLMs reason over and retrieve information. This includes hybrid thinking, which lets a model switch between explicit step-by-step reasoning and direct answering depending on the query, and new frameworks for generative retrieval (GR) such as Retrieval-in-the-Chain and LLM-guided Hierarchical Retrieval. There is also growing interest in applying LLMs to zero-shot learning and demographic reasoning. Noteworthy papers in this area include Unilaw-R1, a large language model tailored for legal reasoning, and ZeroGR, a zero-shot generative retrieval framework that leverages natural language instructions to extend GR across a wide range of IR tasks. Overall, the field is converging on more specialized LLMs that handle complex reasoning and retrieval tasks with greater accuracy and efficiency.
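To make the hybrid-thinking idea concrete, the sketch below shows the general dispatch pattern: a controller decides per query whether to prompt the model for explicit step-by-step reasoning or for a direct answer. This is a minimal illustration under stated assumptions, not any specific paper's method; the names `is_complex` and `generate` are hypothetical, and real systems typically learn the switching policy rather than hard-coding a heuristic.

```python
# Sketch of the hybrid-thinking dispatch pattern (illustrative only).

REASONING_TEMPLATE = (
    "Question: {q}\n"
    "Think step by step, then state the final answer.\n"
)
DIRECT_TEMPLATE = "Question: {q}\nAnswer concisely:\n"


def is_complex(query: str) -> bool:
    # Hypothetical heuristic standing in for a learned controller:
    # treat long or analytical questions as needing explicit reasoning.
    markers = ("why", "prove", "how many", "derive")
    return len(query.split()) > 20 or any(m in query.lower() for m in markers)


def hybrid_answer(query: str, generate) -> str:
    """Route a query to reasoning mode or direct-answer mode.

    `generate` is an assumed callable wrapping any LLM completion API;
    it takes a prompt string and returns the model's completion.
    """
    template = REASONING_TEMPLATE if is_complex(query) else DIRECT_TEMPLATE
    return generate(template.format(q=query))
```

The design point is that easy queries skip the costly reasoning trace while hard ones retain it, trading a small routing decision for large savings in generated tokens.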