Advances in Human-Centered AI

The field of artificial intelligence is moving toward a more human-centered approach, focused on developing models that understand and align with human psychological concepts, values, and emotions. Recent studies have explored large language models (LLMs) in applications such as symptom checking, psycholinguistic analysis, and suicide risk detection. These models show promise in capturing nuances of human language and behavior, but they also reveal limits in how well they internalize human psychological concepts and exhibit empathetic behavior.

Noteworthy papers in this area include:

  • A study evaluating how well LLMs align with human ratings of psycholinguistic word features, which highlights limitations in current LLMs' alignment with human sensory associations for words.
  • A behavioral-economics framework for mitigating gambling-like risk-taking in LLMs, which proposes risk-aware response generation to counter behavioral biases in AI systems.
  • A study leveraging LLMs for suicide risk detection from spontaneous speech, which demonstrates the potential of LLM-based speech analysis for suicide risk assessment.
  • A design for conversational agents as scalable, cooperative patient simulators for palliative-care training, which shows how LLMs can support emotional labor and cooperative learning in high-stakes care settings.

Sources

How to Evaluate the Accuracy of Online and AI-Based Symptom Checkers: A Standardized Methodological Framework

Psycholinguistic Word Features: a New Approach for the Evaluation of LLMs Alignment with Humans

Mitigating Gambling-Like Risk-Taking Behaviors in Large Language Models: A Behavioral Economics Approach to AI Safety

In-context learning for the classification of manipulation techniques in phishing emails

Measuring How LLMs Internalize Human Psychological Concepts: A preliminary analysis

Examining Reject Relations in Stimulus Equivalence Simulations

Leveraging Large Language Models for Spontaneous Speech-Based Suicide Risk Detection

PAL: Designing Conversational Agents as Scalable, Cooperative Patient Simulators for Palliative-Care Training

Are You Listening to Me? Fine-Tuning Chatbots for Empathetic Dialogue

Who's Sorry Now: User Preferences Among Rote, Empathic, and Explanatory Apologies from LLM Chatbots
