Research on large language models (LLMs) is shifting toward adaptive reasoning and cognitive architectures, with a focus on models that allocate reasoning effort according to input characteristics such as difficulty and uncertainty. This shift is driven by the recognition that current LLMs often apply a uniform reasoning strategy regardless of task complexity, wasting computation on simple inputs while under-reasoning on hard ones. Recent work has formalized adaptive reasoning as a control-augmented policy optimization problem and proposed systematic taxonomies for organizing existing methods. Noteworthy papers in this area include 'From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models', which reframes reasoning through the lens of adaptivity, and 'Cognitive Foundations for Reasoning and Their Manifestation in LLMs', which proposes a fine-grained framework for evaluating how cognitive elements manifest in model behavior. Other notable papers, such as 'STaR: Towards Cognitive Table Reasoning via Slow-Thinking Large Language Models' and 'Experience-Guided Adaptation of Inference-Time Reasoning Strategies', demonstrate how slow-thinking capabilities and experience-guided adaptation can improve the performance and reliability of LLMs.
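
To make the control-augmented framing concrete, the sketch below shows one way a difficulty-gated controller might allocate a per-input reasoning budget. It is not taken from any of the cited papers: the `ReasoningBudget` fields, the `estimate_difficulty` heuristic, and the thresholds are illustrative assumptions, and in the control-augmented view the hand-written `allocate_budget` policy would be replaced by a learned controller optimized jointly with the reasoner.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ReasoningBudget:
    """Illustrative knobs a controller could set per input."""
    max_steps: int  # cap on chain-of-thought length
    samples: int    # number of sampled reasoning traces (e.g. for self-consistency)


def allocate_budget(difficulty: float) -> ReasoningBudget:
    """Map an estimated difficulty in [0, 1] to a reasoning budget.

    The thresholds are hand-picked for illustration; a learned policy
    would replace this rule-based controller.
    """
    if difficulty < 0.3:
        return ReasoningBudget(max_steps=1, samples=1)    # answer directly
    if difficulty < 0.7:
        return ReasoningBudget(max_steps=8, samples=1)    # short chain of thought
    return ReasoningBudget(max_steps=32, samples=5)       # deliberate and vote


def adaptive_answer(
    question: str,
    estimate_difficulty: Callable[[str], float],
    reason: Callable[[str, ReasoningBudget], str],
) -> str:
    """Controller loop: estimate difficulty, allocate effort, then reason."""
    budget = allocate_budget(estimate_difficulty(question))
    return reason(question, budget)


if __name__ == "__main__":
    # Stand-in components so the sketch runs without a real LLM backend.
    toy_difficulty = lambda q: min(1.0, len(q.split()) / 40)  # longer question ~ harder
    toy_reasoner = lambda q, b: f"[{b.samples} trace(s), {b.max_steps} steps] answer to: {q}"

    print(adaptive_answer("What is 2 + 2?", toy_difficulty, toy_reasoner))
    print(adaptive_answer(
        "Given the scheduling constraints described above, determine the minimum "
        "number of machines needed to finish all jobs before their deadlines.",
        toy_difficulty, toy_reasoner,
    ))
```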