Research on large language models is moving toward more efficient and effective reasoning and search methods, with recent work focused on improving accuracy while reducing computational cost. One key direction is dual-phase search frameworks that separate reasoning into a planning phase and an execution phase, enabling more efficient exploration of the reasoning space. Another is the design of tree search algorithms that can exploit large test-time budgets to boost reliability. These advances stand to significantly improve the performance of large language models on tasks such as mathematical reasoning and code generation.

Noteworthy papers include:

- Adaptive Test-Time Reasoning via Reward-Guided Dual-Phase Search, which proposes a dual-phase test-time scaling framework that improves accuracy while reducing redundant computation.
- Lateral Tree-of-Thoughts Surpasses ToT by Incorporating Logically-Consistent, Low-Utility Candidates, which introduces a drop-in controller that separates utility from logical consistency and treats low-utility but logically consistent candidates as assets rather than pruning them.
- Bifurcation: How to Explore a Tree, which improves the efficiency of tree exploration algorithms.
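To make the dual-phase idea concrete, here is a minimal sketch of a reward-guided planning/execution split. This is not the method from the cited paper; the function names, the reward model, and the budget split are all illustrative assumptions — the point is only that scoring cheap plans first lets the expensive execution budget go to a single promising plan instead of being spread over redundant rollouts.

```python
def dual_phase_search(problem, propose_plan, execute_plan, reward_model,
                      n_plans=4, n_rollouts=2):
    """Illustrative dual-phase test-time search (hypothetical interface).

    Phase 1 (planning): sample several cheap candidate plans and score
    them with a reward model. Phase 2 (execution): spend the remaining
    compute budget rolling out only the highest-reward plan.
    """
    # Phase 1: generate and score candidate plans.
    plans = [propose_plan(problem) for _ in range(n_plans)]
    best_plan = max(plans, key=lambda p: reward_model(problem, p))

    # Phase 2: execute only the best-scoring plan several times.
    answers = [execute_plan(problem, best_plan) for _ in range(n_rollouts)]
    # Return the majority answer among rollouts (self-consistency style).
    return max(set(answers), key=answers.count)
```

In this sketch the planning phase is cheap (short plan sketches) and the execution phase is expensive (full reasoning traces), so pruning before execution is where the compute savings come from.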
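The distinction Lateral Tree-of-Thoughts draws between utility and logical consistency can also be sketched in a few lines. The function below is a hypothetical controller, not the paper's API: a plain ToT controller would keep only the top-k candidates by utility, whereas this variant first filters for logical consistency and then retains consistent low-utility candidates as extra exploration seeds instead of discarding them.

```python
def select_candidates(candidates, utility, is_consistent, top_k=3):
    """Illustrative Lateral-ToT-style selection (hypothetical names).

    Splits candidates into a high-utility 'keep' set and a 'lateral' set
    of low-utility but logically consistent candidates, which a standard
    ToT controller would have pruned.
    """
    # First gate on logical consistency; inconsistent candidates are dropped.
    consistent = [c for c in candidates if is_consistent(c)]
    ranked = sorted(consistent, key=utility, reverse=True)
    keep = ranked[:top_k]      # exploit: high-utility candidates
    lateral = ranked[top_k:]   # explore: low-utility but consistent ones
    return keep, lateral
```

A search loop could expand the `keep` set first and fall back to `lateral` candidates when the high-utility branches stall, which is one way to read the paper's claim that such candidates are assets rather than waste.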