Explainable Reasoning and Human-in-the-Loop Systems

Research on large language models (LLMs) and chain-of-thought (CoT) reasoning is moving toward more explainable and transparent methods. One direction provides insight into the reasoning process itself, for example by abstracting CoT trajectories into structured latent dynamics and modeling their progression as Markov chains. Another is the development of human-in-the-loop systems that let users visualize, intervene in, and correct the reasoning process, leading to more accurate and trustworthy conclusions. Two notable papers illustrate these directions: Explainable Chain-of-Thought Reasoning: An Empirical Analysis on State-Aware Reasoning Dynamics introduces a state-aware transition framework for abstracting CoT trajectories, and Vis-CoT: A Human-in-the-Loop Framework for Interactive Visualization and Intervention in LLM Chain-of-Thought Reasoning lets users inspect and edit a model's reasoning interactively, improving final-answer accuracy by up to 24 percentage points.
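To make the idea of state-aware reasoning dynamics concrete, the sketch below abstracts CoT steps into a small set of discrete reasoning states and estimates a Markov transition matrix over them. This is a minimal illustration, not the cited papers' method: the state vocabulary, the keyword-based classifier, and the toy trajectories are all assumptions made for the example.

```python
# Minimal sketch: abstract chain-of-thought steps into discrete reasoning states
# and estimate a first-order Markov transition matrix over those states.
# The state names and keyword rules below are illustrative assumptions only.
from collections import defaultdict

STATES = ["decompose", "compute", "verify", "conclude"]  # assumed latent state vocabulary

def classify_step(step: str) -> str:
    """Map a raw CoT step to a coarse reasoning state via simple keyword rules."""
    text = step.lower()
    if any(k in text for k in ("check", "verify", "confirm")):
        return "verify"
    if any(k in text for k in ("therefore", "so the answer", "final answer")):
        return "conclude"
    if any(k in text for k in ("=", "compute", "calculate")):
        return "compute"
    return "decompose"

def transition_matrix(trajectories):
    """Estimate P(next state | current state) from abstracted CoT trajectories."""
    counts = defaultdict(lambda: defaultdict(int))
    for steps in trajectories:
        states = [classify_step(s) for s in steps]
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
    matrix = {}
    for a in STATES:
        total = sum(counts[a].values())
        matrix[a] = {b: (counts[a][b] / total if total else 0.0) for b in STATES}
    return matrix

# Toy usage: two short CoT trajectories abstracted into state sequences.
example = [
    ["First, break the problem into parts.", "Compute 12 * 7 = 84.",
     "Check that 84 / 7 = 12.", "Therefore the answer is 84."],
    ["Split the task into subgoals.", "Calculate the remaining distance.",
     "Verify the units match.", "So the answer is 42 km."],
]
for state, row in transition_matrix(example).items():
    print(state, row)
```

In a human-in-the-loop setting of the kind Vis-CoT describes, a transition structure like this could be rendered as a graph so a user can spot an implausible step and intervene before the model commits to a final answer; the code above only covers the abstraction and estimation side.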

Sources

Explainable Chain-of-Thought Reasoning: An Empirical Analysis on State-Aware Reasoning Dynamics

Vis-CoT: A Human-in-the-Loop Framework for Interactive Visualization and Intervention in LLM Chain-of-Thought Reasoning

DTMC Model Checking by Path Abstraction Revisited (extended version)

Designing a Lightweight GenAI Interface for Visual Data Analysis

OPRA-Vis: Visual Analytics System to Assist Organization-Public Relationship Assessment with Large Language Models

GlyphWeaver: Unlocking Glyph Design Creativity with Uniform Glyph DSL and AI
