Advances in Large Language Models for Decision Making and Ambiguity Detection

The field of natural language processing is seeing rapid progress in applying Large Language Models (LLMs) to decision-making and ambiguity detection. Researchers are exploring LLMs that simulate group decision-making, detect agreement among participant agents, and improve the efficiency of debates. LLMs are also being used to resolve ambiguity in user requests, instructions, and intentions, which is crucial for safety-critical applications such as collaborative surgical robots. Multi-agent debate frameworks and ensemble methods are being investigated to further enhance LLM performance on ambiguity detection and resolution. Noteworthy papers in this area include: Finding Common Ground, which presents a novel LLM-based multi-agent system for detecting agreement in decision conferences; Beyond Single Models, which introduces a multi-agent debate framework to enhance LLM detection of ambiguity in requests; and MAD-Spear, which exposes security vulnerabilities of multi-agent debate systems and proposes a conformity-driven prompt injection attack.
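To make the multi-agent debate idea concrete, the following is a minimal sketch of a debate loop with an agreement check. It is illustrative only and not drawn from any of the cited papers: `mock_agent` stands in for a real LLM call, and the names, threshold, and convergence heuristic are all assumptions.

```python
import random
from collections import Counter

random.seed(0)  # deterministic for this sketch

def mock_agent(agent_id: int, request: str, transcript: list[str]) -> str:
    # Stand-in for an LLM call: in later rounds an agent tends to adopt
    # the majority stance seen so far, loosely imitating persuasion.
    if transcript and random.random() < 0.7:
        return Counter(transcript).most_common(1)[0][0]
    return random.choice(["ambiguous", "clear"])

def debate(request: str, n_agents: int = 5, max_rounds: int = 4,
           threshold: float = 0.8) -> tuple[str, int]:
    """Run rounds of debate until a fraction >= threshold of agents agree."""
    transcript: list[str] = []
    for round_no in range(1, max_rounds + 1):
        stances = [mock_agent(i, request, transcript) for i in range(n_agents)]
        transcript.extend(stances)
        label, count = Counter(stances).most_common(1)[0]
        if count / n_agents >= threshold:  # agreement reached
            return label, round_no
    return label, max_rounds  # fall back to the final-round majority

label, rounds = debate("Pick up the instrument near the tray.")
print(f"verdict={label} after {rounds} round(s)")
```

A production system would replace the mock with actual model calls, exchange full natural-language arguments rather than bare labels, and use an LLM moderator (as in the agreement-detection work above) instead of a simple vote count.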

Sources

Finding Common Ground: Using Large Language Models to Detect Agreement in Multi-Agent Decision Conferences

DS@GT at Touché: Large Language Models for Retrieval-Augmented Debate

Referential ambiguity and clarification requests: comparing human and LLM behaviour

LLM-based ambiguity detection in natural language instructions for collaborative surgical robots

Beyond Single Models: Enhancing LLM Detection of Ambiguity in Requests through Debate

LLM-Based Config Synthesis requires Disambiguation

MAD-Spear: A Conformity-Driven Prompt Injection Attack on Multi-Agent Debate Systems