The field of natural language processing is witnessing significant developments in the application of Large Language Models (LLMs) to decision-making processes and ambiguity detection. Researchers are exploring the potential of LLMs to simulate group decision-making, detect agreement among participant agents, and improve the efficiency of debates. LLMs are also being used to detect ambiguity in user requests, instructions, and intentions, which is crucial for safety-critical applications such as collaborative surgical robots. Multi-agent debate frameworks and ensemble methods are being investigated as ways to enhance LLM performance on ambiguity detection and resolution. Noteworthy papers in this area include:

- Finding Common Ground, which presents a novel LLM-based multi-agent system for detecting agreement in decision conferences.
- Beyond Single Models, which introduces a multi-agent debate framework to enhance LLM detection of ambiguity in requests.
- MAD-Spear, which highlights the security vulnerabilities of multi-agent debate systems and proposes a conformity-driven prompt injection attack.
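To make the agreement-detection idea concrete, here is a minimal sketch of the debate-and-vote loop such systems build on. It uses stub agents in place of real LLM calls; the consensus threshold, round limit, and agent behaviors are illustrative assumptions, not the method of any of the papers above.

```python
from collections import Counter

def debate(agents, question, max_rounds=3, threshold=0.7):
    """Run debate rounds until at least `threshold` of agents agree.

    Each agent is a callable taking (question, transcript) and returning
    an answer string; the transcript holds all previous rounds' answers.
    Returns (consensus_answer_or_None, rounds_used).
    """
    transcript = []
    for round_no in range(max_rounds):
        answers = [agent(question, transcript) for agent in agents]
        transcript.append(answers)
        top, count = Counter(answers).most_common(1)[0]
        if count / len(agents) >= threshold:
            return top, round_no + 1
    return None, max_rounds  # no consensus reached

# Stub agents standing in for LLM calls. A real agent would prompt a
# model with the question plus the transcript of earlier rounds.
def stubborn(answer):
    # Always gives the same answer, regardless of the debate so far.
    return lambda question, transcript: answer

def conformist(default):
    # Adopts the previous round's majority answer, if there is one.
    def agent(question, transcript):
        if transcript:
            return Counter(transcript[-1]).most_common(1)[0][0]
        return default
    return agent

agents = [stubborn("ambiguous"), stubborn("ambiguous"), conformist("clear")]
result, rounds = debate(agents, "Is the request ambiguous?")
# The conformist switches after seeing round 1, so consensus forms in round 2.
```

Note that MAD-Spear exploits exactly this conformity dynamic: an injected agent that sways the apparent majority can drag conformist agents toward a wrong consensus, which is why the paper treats conformity as an attack surface rather than only a convergence aid.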