Advancements in Adaptive Reasoning and Cognitive Architectures for Large Language Models

The field of large language models (LLMs) is shifting toward adaptive reasoning and cognitive architectures, with a focus on models that allocate reasoning effort according to input characteristics such as difficulty and uncertainty. This shift is driven by the recognition that current LLMs often apply a uniform reasoning strategy regardless of task complexity, wasting computation on easy inputs while under-serving hard ones. Recent work formalizes adaptive reasoning as a control-augmented policy optimization problem, in which a controller decides how much reasoning to invest before or during generation, and proposes systematic taxonomies for organizing existing methods.

Noteworthy papers in this area include 'From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models', which reframes reasoning through the lens of adaptivity, and 'Cognitive Foundations for Reasoning and Their Manifestation in LLMs', which proposes a fine-grained framework for analyzing how cognitive elements manifest behaviorally in LLMs. Others, such as 'STaR: Towards Cognitive Table Reasoning via Slow-Thinking Large Language Models' and 'Experience-Guided Adaptation of Inference-Time Reasoning Strategies', demonstrate how slow-thinking capabilities and experience-guided adaptation can improve the performance and reliability of LLMs.
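To make the control-augmented view concrete, the sketch below shows one way such a controller could look. It is an illustration only, not an implementation from any of the cited papers: the names ReasoningBudget, estimate_difficulty, and allocate_budget are hypothetical, and the length-based difficulty proxy stands in for whatever uncertainty signal a real system would use (for example, model entropy over a draft answer or a learned difficulty classifier).

```python
# Minimal sketch of adaptive reasoning-effort allocation as a control policy.
# All names and thresholds here are illustrative assumptions, not taken from
# any of the papers listed under Sources.

from dataclasses import dataclass


@dataclass
class ReasoningBudget:
    max_tokens: int   # cap on chain-of-thought length
    num_samples: int  # number of self-consistency samples to draw


def estimate_difficulty(prompt: str) -> float:
    """Hypothetical difficulty proxy: normalized prompt length in [0, 1].

    A real controller might instead use token-level entropy, a learned
    classifier, or agreement among cheap draft answers.
    """
    return min(len(prompt) / 2000.0, 1.0)


def allocate_budget(difficulty: float) -> ReasoningBudget:
    """Map estimated difficulty to a discrete reasoning-effort level."""
    if difficulty < 0.3:
        return ReasoningBudget(max_tokens=128, num_samples=1)    # fast path
    if difficulty < 0.7:
        return ReasoningBudget(max_tokens=512, num_samples=3)    # moderate effort
    return ReasoningBudget(max_tokens=2048, num_samples=5)       # slow-thinking path


if __name__ == "__main__":
    easy = "What is 2 + 2?"
    hard = "Prove the following claim about nested FSM execution: ... " * 40
    for prompt in (easy, hard):
        d = estimate_difficulty(prompt)
        print(f"difficulty={d:.2f} -> {allocate_budget(d)}")
```

The design point this sketch captures is the separation of concerns emphasized in the adaptive-reasoning framing: the budget decision is made by a lightweight policy outside the model, so the same underlying LLM can serve both a fast path for easy queries and a slow-thinking path for hard ones.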

Sources

From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models

Studies with impossible languages falsify LMs as models of human language

ReTrace: Interactive Visualizations for Reasoning Traces of Large Reasoning Models

STaR: Towards Cognitive Table Reasoning via Slow-Thinking Large Language Models

Experience-Guided Adaptation of Inference-Time Reasoning Strategies

Reasoning: From Reflection to Solution

On the Notion that Language Models Reason

Do LLMs and Humans Find the Same Questions Difficult? A Case Study on Japanese Quiz Answering

On the Brittleness of LLMs: A Journey around Set Membership

The Illusion of Procedural Reasoning: Measuring Long-Horizon FSM Execution in LLMs

DEVAL: A Framework for Evaluating and Improving the Derivation Capability of Large Language Models

ProRAC: A Neuro-symbolic Method for Reasoning about Actions with LLM-based Progression

DesignerlyLoop: Bridging the Cognitive Gap through Visual Node-Based Reasoning in Human-AI Collaborative Design

From generative AI to the brain: five takeaways

Cognitive Foundations for Reasoning and Their Manifestation in LLMs
