Debiasing and Emotional Intelligence in Large Language Models

The field of large language models (LLMs) is moving toward more sophisticated and nuanced approaches to debiasing and emotional intelligence. Researchers are exploring metacognitive prompts that reduce biases in LLM outputs and, in turn, improve human decision-making. There is also growing interest in leveraging LLMs to support emotional regulation and co-regulation in a range of contexts, including parent-neurodivergent child dyads and multi-party social robot interactions.
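
As a rough illustration of the metacognitive-prompt idea, the sketch below asks the model a follow-up ("Could you be wrong?") after its first answer and returns the revised response. The `ask` callable, the prompt wording, and the two-turn structure are illustrative assumptions, not the exact protocol evaluated in the paper.

```python
# Minimal sketch of metacognitive debiasing: after the model answers,
# ask it to reconsider ("Could you be wrong?") and revise if needed.
# `ask` is a placeholder for any chat-completion call; the prompt wording
# here is illustrative, not the exact wording used in the paper.

from typing import Callable, Dict, List

Message = Dict[str, str]

def debiased_answer(question: str, ask: Callable[[List[Message]], str]) -> str:
    history: List[Message] = [{"role": "user", "content": question}]
    first_pass = ask(history)

    # Metacognitive follow-up prompting the model to check its own reasoning.
    history += [
        {"role": "assistant", "content": first_pass},
        {"role": "user", "content": (
            "Could you be wrong? Reconsider your answer, note any biases or "
            "unstated assumptions, and give a revised answer if needed."
        )},
    ]
    return ask(history)
```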

The emergence of hierarchical emotion organization in LLMs is also being studied, with findings suggesting that these models can internalize aspects of social perception and develop complex emotional hierarchies. However, this also raises concerns about systematic biases in emotion recognition, particularly for underrepresented groups.
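
One way to probe for such hierarchy, loosely following the idea of probabilistic dependencies between emotion labels, is to test whether a broader emotion is predicted almost every time a narrower one is, but not the reverse. The sketch below does this over a set of per-output label sets; the 0.9 threshold and the toy labels are assumptions for illustration, not the paper's method or data.

```python
# Illustrative sketch: infer a parent/child relation between emotion labels
# from co-occurrence in model predictions. We say "child -> parent" when the
# parent label appears in nearly every output that contains the child label,
# but not vice versa. The 0.9 threshold and the sample data are assumptions.

from collections import defaultdict
from itertools import permutations

def infer_hierarchy(predictions, threshold=0.9):
    counts = defaultdict(int)   # label -> number of outputs containing it
    joint = defaultdict(int)    # (a, b) -> number of outputs containing both
    for labels in predictions:
        for a in labels:
            counts[a] += 1
        for a, b in permutations(labels, 2):
            joint[(a, b)] += 1

    edges = []
    for child, parent in permutations(counts, 2):
        p_parent_given_child = joint[(child, parent)] / counts[child]
        p_child_given_parent = joint[(child, parent)] / counts[parent]
        if p_parent_given_child >= threshold and p_child_given_parent < threshold:
            edges.append((child, parent))  # child nests under parent
    return edges

# Toy example: "pride" and "relief" co-occur with the broader label "joy".
sample = [{"joy", "pride"}, {"joy", "relief"}, {"joy", "pride"}, {"joy"}, {"sadness"}]
print(infer_hierarchy(sample))  # e.g. [('pride', 'joy'), ('relief', 'joy')]
```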

Furthermore, researchers are investigating the use of LLMs in mental health applications, including detecting and addressing family communication bias. One approach is a role-playing, LLM-based multi-agent support framework that analyzes dialogue and generates feedback to promote psychologically safe family communication.
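
A minimal sketch of what such a role-playing multi-agent setup might look like: each family role is an LLM persona that restates the exchange from its own perspective, and a separate moderator agent reviews the perspectives for biased patterns and drafts feedback. The personas, prompts, and the `chat` callable are assumptions for illustration, not the framework described in the paper.

```python
# Hedged sketch of a role-playing multi-agent dialogue analysis loop.
# `chat(system, user) -> str` stands in for any LLM call; the personas and
# the moderator instructions are illustrative, not the paper's prompts.

from typing import Callable

def analyze_family_dialogue(transcript: str, chat: Callable[[str, str], str]) -> str:
    personas = {
        "parent": "You role-play the parent in this conversation.",
        "child": "You role-play the child in this conversation.",
    }

    # Each role-playing agent restates the exchange from its own perspective,
    # surfacing how the same utterances may be heard differently.
    perspectives = {
        role: chat(system_prompt,
                   f"Here is a family conversation:\n{transcript}\n"
                   "Describe how your character experienced it.")
        for role, system_prompt in personas.items()
    }

    # A moderator agent looks for communication bias (e.g., dismissive or
    # leading phrasing) and drafts feedback aimed at psychological safety.
    moderator_system = (
        "You are a facilitator. Identify biased or unsafe communication "
        "patterns and suggest concrete, supportive rephrasings."
    )
    summary = "\n\n".join(f"{role}: {text}" for role, text in perspectives.items())
    return chat(moderator_system, f"Conversation:\n{transcript}\n\nPerspectives:\n{summary}")
```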

Noteworthy papers in this area include:

Could you be wrong: Debiasing LLMs using a metacognitive prompt for improving human decision making - demonstrates the effectiveness of a metacognitive prompt in reducing biases in LLM outputs.

Towards Emotion Co-regulation with LLM-powered Socially Assistive Robots - explores integrating LLMs with socially assistive robots to facilitate emotion co-regulation between parents and neurodivergent children.

Emergence of Hierarchical Emotion Organization in Large Language Models - analyzes the probabilistic dependencies between emotional states in LLM outputs and finds that these models develop complex hierarchical emotion trees.

Role-Playing LLM-Based Multi-Agent Support Framework for Detecting and Addressing Family Communication Bias - develops an LLM-based multi-agent dialogue support framework for detecting and addressing family communication bias.

Humans are more gullible than LLMs in believing common psychological myths - investigates whether LLMs mirror the human tendency to believe psychological myths and explores methods to mitigate it.

Sources

Could you be wrong: Debiasing LLMs using a metacognitive prompt for improving human decision making

Towards Emotion Co-regulation with LLM-powered Socially Assistive Robots: Integrating LLM Prompts and Robotic Behaviors to Support Parent-Neurodivergent Child Dyads

Emergence of Hierarchical Emotion Organization in Large Language Models

Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health

Whom to Respond To? A Transformer-Based Model for Multi-Party Social Robot Interaction

Role-Playing LLM-Based Multi-Agent Support Framework for Detecting and Addressing Family Communication Bias

Humans are more gullible than LLMs in believing common psychological myths
