Research on reasoning language models is moving toward a deeper understanding of how these models think and make decisions. One thread develops benchmarks and evaluation frameworks that probe the cognitive habits of large reasoning models; another focuses on predicting and controlling how long a model spends reasoning before it answers. There is also growing interest in interactive reasoning and in visualizing chain-of-thought outputs to make these models more transparent and interpretable. Notable papers in this area include "Towards Understanding the Cognitive Habits of Large Reasoning Models", which introduces a principled benchmark for evaluating cognitive habits in large reasoning models, and "Thinking About Thinking: SAGE-nano's Inverse Reasoning for Self-Aware Language Models", which proposes an inverse-reasoning paradigm that lets large language models decompose and explain their own reasoning chains post hoc.
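To make the idea of controlling thinking time concrete, here is a minimal sketch of one common approach: capping the number of reasoning tokens a model may emit before it is forced to close its thinking phase and answer. Everything here is illustrative and not taken from the cited papers; `budgeted_generate`, `next_token`, and the `THINK_END`/`EOS` markers are assumed names standing in for whatever incremental decoder and delimiters a given model actually uses.

from typing import Callable, List

THINK_END = "</think>"   # assumed delimiter between reasoning and the final answer
EOS = "<eos>"            # assumed end-of-sequence token


def budgeted_generate(
    next_token: Callable[[List[str]], str],
    prompt: List[str],
    max_think_tokens: int = 256,
    max_total_tokens: int = 512,
) -> List[str]:
    """Incrementally decode, closing the reasoning phase once the
    thinking-token budget is spent so the model moves on to its answer."""
    output: List[str] = []
    thinking = True
    think_count = 0
    while len(output) < max_total_tokens:
        token = next_token(prompt + output)
        if token == EOS:
            break
        output.append(token)
        if thinking:
            if token == THINK_END:
                thinking = False
            else:
                think_count += 1
                if think_count >= max_think_tokens:
                    # Budget exhausted: force the end-of-thinking marker so the
                    # model transitions from reasoning to answering.
                    output.append(THINK_END)
                    thinking = False
    return output


if __name__ == "__main__":
    # Toy decoder that "thinks" indefinitely unless cut off, then answers.
    def toy_next_token(context: List[str]) -> str:
        if THINK_END in context:
            return "42" if context[-1] == THINK_END else EOS
        return "step"

    print(budgeted_generate(toy_next_token, ["Q:"], max_think_tokens=5))

In practice the same budgeting logic would sit inside the decoding loop of a real inference stack rather than around a toy token generator, but the design choice is the same: treat reasoning length as an explicit, tunable parameter instead of leaving it entirely to the model.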