The field of large language model (LLM) reasoning is moving toward a deeper understanding of the mechanisms that let these models solve complex tasks. Recent work has focused on identifying factors that shape a model's reasoning potential, including its ability to distinguish sound from unsound knowledge. There is also growing interest in training methods and frameworks that improve LLM reasoning, such as hierarchical metacognitive reinforcement learning, as well as analysis frameworks built on algorithmic primitives. These advances could substantially improve LLM performance across a wide range of tasks, from mathematical reasoning to natural language understanding. Noteworthy papers in this area include Soundness-Aware Level, which introduces a microscopic metric for measuring a model's ability to distinguish sound from unsound knowledge; Cog-Rethinker, which proposes a hierarchical metacognitive reinforcement-learning framework for LLM reasoning; and Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models, which presents a framework for tracing and steering the algorithmic primitives that underlie model reasoning.
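
To make the soundness-distinction idea concrete, the sketch below scores matched pairs of sound and unsound statements by the average per-token log-likelihood a causal LM assigns to them, and reports how often the sound version is preferred. This is a minimal illustrative toy, not the Soundness-Aware Level metric from the paper; the model checkpoint, the example statement pairs, and the likelihood-based scoring are all assumptions made for illustration.

```python
# Hypothetical illustration only: a toy "soundness discrimination" check,
# NOT the Soundness-Aware Level metric described in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM checkpoint could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Mean per-token log-likelihood the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # The returned loss is the mean negative log-likelihood per token.
        loss = model(ids, labels=ids).loss
    return -loss.item()

# Matched pairs: (sound statement, unsound counterpart) -- illustrative examples.
pairs = [
    ("The derivative of x^2 with respect to x is 2x.",
     "The derivative of x^2 with respect to x is x^3."),
    ("7 is a prime number.",
     "7 is an even number."),
]

# Fraction of pairs where the sound statement receives the higher score.
hits = sum(avg_log_likelihood(s) > avg_log_likelihood(u) for s, u in pairs)
print(f"discrimination rate: {hits / len(pairs):.2f}")
```

A pairwise preference rate like this only captures one coarse facet of soundness awareness; the paper's metric operates at a finer, more microscopic level of the model's knowledge.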