The field of large language models (LLMs) and chain-of-thought (CoT) reasoning is moving toward more explainable and transparent methods. Researchers are exploring ways to expose the reasoning process itself, for example by abstracting CoT trajectories into structured latent dynamics and modeling their progression as Markov chains. Another trend is the development of human-in-the-loop systems that let users visualize, intervene in, and correct the reasoning process, leading to more accurate and trustworthy conclusions.

Notable papers in this area include "Explainable Chain-of-Thought Reasoning: An Empirical Analysis on State-Aware Reasoning Dynamics", which introduces a state-aware transition framework for abstracting CoT trajectories, and "Vis-CoT: A Human-in-the-Loop Framework for Interactive Visualization and Intervention in LLM Chain-of-Thought Reasoning", which presents a framework for interactive visualization of and intervention in LLM chain-of-thought reasoning, improving final-answer accuracy by up to 24 percentage points.
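To make the Markov-chain view concrete, here is a minimal sketch (not the papers' actual method): it assumes each CoT step has already been mapped to one of a few hypothetical abstract state labels, and it estimates a first-order transition matrix from labeled trajectories.

```python
from collections import defaultdict

# Hypothetical reasoning-state labels for illustration only;
# the actual abstraction used in the cited work may differ.
STATES = ["decompose", "retrieve", "deduce", "verify", "conclude"]

def estimate_transition_matrix(trajectories):
    """Estimate a first-order Markov transition matrix from CoT trajectories,
    where each trajectory is a list of abstract state labels (one per step)."""
    counts = {s: defaultdict(int) for s in STATES}
    for traj in trajectories:
        for prev, nxt in zip(traj, traj[1:]):
            counts[prev][nxt] += 1
    # Normalize each row of counts into a probability distribution.
    matrix = {}
    for s in STATES:
        total = sum(counts[s].values())
        matrix[s] = {t: (counts[s][t] / total if total else 0.0) for t in STATES}
    return matrix

# Toy usage with hand-labeled trajectories (made up for this example).
trajectories = [
    ["decompose", "retrieve", "deduce", "verify", "conclude"],
    ["decompose", "deduce", "deduce", "conclude"],
]
P = estimate_transition_matrix(trajectories)
print(P["deduce"])  # e.g. probabilities for deduce -> {deduce, verify, conclude}
```

Inspecting such a transition matrix is one way to characterize how reasoning tends to progress (for instance, how often a model verifies before concluding), which is the kind of state-level analysis the trend above points toward.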