The field of large language models (LLMs) is advancing rapidly, with a strong focus on improving reasoning and information-retrieval capabilities. Recent work underscores the value of explicit reasoning: studies show that generating an explicit reasoning trace consistently improves answer quality across diverse domains. Attention analysis further reveals that answer tokens attend substantially to reasoning tokens, with certain mid-layer Reasoning-Focus Heads closely tracking the reasoning trajectory (a minimal sketch of this kind of measurement follows the paper list below). Mechanistic interventions that probe the dependence of answer tokens on reasoning activations confirm a directional and functional flow of information from reasoning to answer. Complementing these analyses, novel fine-tuning frameworks have been introduced to promote reasoning-dominant behavior and strengthen generalizable reasoning capabilities.

Noteworthy papers in this area include:

- Retro*, which proposes a rubric-based relevance scoring mechanism for reasoning-intensive document retrieval and achieves state-of-the-art performance on the BRIGHT benchmark.
- Latent Thinking Optimization, which systematically studies how LLMs think in the latent space and proposes a probabilistic algorithm for optimizing latent thinking processes, demonstrating significant improvements on reasoning tasks.
- DecepChain, which presents a backdoor attack paradigm that steers models to generate reasoning that appears benign while yielding incorrect conclusions, highlighting an urgent but underexplored risk in LLMs.
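To make the attention-analysis claim above concrete, the sketch below shows one way to quantify how much of an answer token's attention lands on reasoning tokens, per layer and head. The function name `reasoning_attention_mass`, the span arguments, and the idea of screening for high-mass heads are illustrative assumptions, not the exact procedure used in the cited work.

```python
import torch

def reasoning_attention_mass(attentions, reasoning_span, answer_span):
    """Average fraction of each answer token's attention that falls on reasoning tokens.

    attentions: list of per-layer tensors of shape (num_heads, seq_len, seq_len),
        with rows already softmax-normalized (as most transformer libraries return them).
    reasoning_span / answer_span: (start, end) token index ranges.
    Returns a (num_layers, num_heads) tensor of scores.
    """
    r0, r1 = reasoning_span
    a0, a1 = answer_span
    scores = []
    for layer_attn in attentions:
        # Attention from answer rows onto reasoning columns, summed over columns,
        # then averaged over answer rows -> one score per head.
        mass = layer_attn[:, a0:a1, r0:r1].sum(dim=-1).mean(dim=-1)
        scores.append(mass)
    return torch.stack(scores)

# Tiny synthetic check: 2 layers, 4 heads, 10 tokens,
# with tokens 2..5 treated as "reasoning" and 7..9 as "answer".
attn = [torch.softmax(torch.randn(4, 10, 10), dim=-1) for _ in range(2)]
per_head = reasoning_attention_mass(attn, reasoning_span=(2, 6), answer_span=(7, 10))
print(per_head.shape)                      # torch.Size([2, 4])
print(torch.topk(per_head.flatten(), k=3)) # heads with the most reasoning-directed attention
```

Under this reading, heads with the largest mass values would be the natural candidates for the mid-layer Reasoning-Focus Heads described above, though the cited work may define and select them differently.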