Strategic Reasoning and Emotional Intelligence in Large Language Models

The field of large language models (LLMs) is moving toward more advanced strategic reasoning and emotional intelligence. Recent work evaluates LLMs' ability to form coherent beliefs, make strategic decisions, and express emotion. Studies show that LLMs can display belief-coherent best-response behavior, meta-reasoning, and novel heuristic formation, providing a structured basis for the study of strategic cognition in artificial agents; a sketch of the belief-coherence idea follows the paper list below. In parallel, research has explored emotional intelligence in LLMs, including the discovery of emotion circuits that can be harnessed for universal emotion control. Noteworthy papers include:

LLMs as Strategic Agents: Beliefs, Best Response Behavior, and Emergent Heuristics, which develops a framework for identifying strategic thinking in LLMs.

Do LLMs "Feel"? Emotion Circuits Discovery and Control, which constructs a controlled dataset eliciting comparable internal states across emotions and reports 99.65% emotion-expression accuracy on its test set.
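To make "belief-coherent best-response behavior" concrete, here is a minimal sketch of the underlying game-theoretic check: given an agent's stated probabilistic belief about its opponent, is its chosen action the one that maximizes expected payoff? The payoff matrix, belief values, and function names are illustrative assumptions, not the evaluation protocol used in the paper.

```python
# Illustrative best-response check in a hypothetical 2x2 game.
ACTIONS = ("cooperate", "defect")

# Row player's payoffs: PAYOFF[my_action][opponent_action] (made-up values).
PAYOFF = {
    "cooperate": {"cooperate": 3.0, "defect": 0.0},
    "defect":    {"cooperate": 5.0, "defect": 1.0},
}

def expected_payoff(action: str, belief: dict[str, float]) -> float:
    """Expected payoff of `action` under a probabilistic belief over opponent play."""
    return sum(p * PAYOFF[action][opp] for opp, p in belief.items())

def best_response(belief: dict[str, float]) -> str:
    """Action that maximizes expected payoff given the belief."""
    return max(ACTIONS, key=lambda a: expected_payoff(a, belief))

def is_belief_coherent(stated_belief: dict[str, float], chosen_action: str) -> bool:
    """True if the chosen action is a best response to the stated belief."""
    return chosen_action == best_response(stated_belief)

# An agent that believes the opponent defects with probability 0.8
# should itself defect; choosing "cooperate" would be belief-incoherent.
belief = {"cooperate": 0.2, "defect": 0.8}
print(best_response(belief))                    # defect
print(is_belief_coherent(belief, "cooperate"))  # False
```

An evaluation along these lines can elicit a model's belief and action separately, then test whether the two are consistent rather than judging the action alone.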
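On the emotion-control side, one common mechanism for controlling internal circuits is activation steering: derive a direction as the mean difference between hidden states from emotional versus neutral prompts, then add a scaled copy of that direction to new activations. The sketch below uses synthetic arrays as stand-ins for collected hidden states; it illustrates the general steering technique, not the specific procedure from the cited paper.

```python
# Hedged sketch of activation steering with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
d_model = 64  # assumed hidden-state width for the example

# Stand-ins for hidden states collected at one layer: (n_prompts, d_model).
joy_acts     = rng.normal(loc=0.5, size=(32, d_model))
neutral_acts = rng.normal(loc=0.0, size=(32, d_model))

# Steering direction: difference of means, normalized to unit length.
direction = joy_acts.mean(axis=0) - neutral_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(hidden: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """Shift a hidden state along the emotion direction by `strength`."""
    return hidden + strength * direction

h = rng.normal(size=d_model)
h_steered = steer(h)
# The projection onto the direction increases, i.e. the state
# moves toward the "emotional" side of the contrast.
print(h @ direction, h_steered @ direction)
```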

Sources

LLMs as Strategic Agents: Beliefs, Best Response Behavior, and Emergent Heuristics

Evaluating Language Models' Evaluations of Games

Do LLMs "Feel"? Emotion Circuits Discovery and Control

Beyond Survival: Evaluating LLMs in Social Deduction Games with Human-Aligned Strategies

AI Agents for the Dhumbal Card Game: A Comparative Study

Scheming Ability in LLM-to-LLM Strategic Interactions

Do You Get the Hint? Benchmarking LLMs on the Board Game Concept

Doing Things with Words: Rethinking Theory of Mind Simulation in Large Language Models

TextBandit: Evaluating Probabilistic Reasoning in LLMs Through Language-Only Decision Tasks
