Artificial intelligence research is moving toward a more human-centered approach, focused on developing models that understand and align with human psychological concepts, values, and emotions. Recent studies have explored large language models (LLMs) in applications such as symptom checking, psycholinguistic analysis, and suicide risk detection. These models show promise in capturing the nuances of human language and behavior, but they also reveal limits in how far they internalize human psychological concepts and exhibit empathetic behavior.
Noteworthy papers in this area include:
- A study evaluating how well LLM judgments align with human ratings of psycholinguistic word features, which highlights limitations of current LLMs in matching human sensory associations for words (a minimal evaluation sketch follows this list).
- A framework for mitigating gambling-like risk-taking behaviors in LLMs, which proposes a risk-aware response generation approach to address behavioral biases in AI systems.
- A study leveraging LLMs for suicide risk detection from spontaneous speech, which demonstrates the potential of LLM-based analysis of speech in risk assessment (a hedged pipeline sketch also follows this list).
- A design for conversational agents as scalable, cooperative patient simulators for palliative-care training, which highlights the potential of LLMs in supporting emotional labor and cooperative learning in high-stakes care settings.
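To make the psycholinguistic evaluation above concrete, the sketch below compares LLM-elicited word ratings against human norms using a rank correlation, one common way such alignment studies are scored. The rating scale and the `rate_with_llm` stub are illustrative assumptions, not the cited study's actual protocol or data.

```python
# Minimal sketch of a psycholinguistic alignment check, assuming ratings on
# a 1-7 scale for a single sensory dimension (e.g. association with smell).
from scipy.stats import spearmanr


def rate_with_llm(word: str, dimension: str) -> float:
    """Placeholder: prompt an LLM for a 1-7 rating of `word` on `dimension`
    and parse the numeric reply. Replace with a real model call."""
    raise NotImplementedError


def alignment_score(words, human_ratings, dimension):
    """Spearman correlation between human norms and LLM-elicited ratings;
    values near 1.0 indicate close alignment on this dimension."""
    llm_ratings = [rate_with_llm(w, dimension) for w in words]
    rho, p_value = spearmanr(human_ratings, llm_ratings)
    return rho, p_value
```

A rank correlation is used here because rating scales are ordinal; a study may equally report Pearson correlation or per-dimension error, depending on how the human norms are distributed.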
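The speech-based study suggests a two-stage pipeline: transcribe the recording, then have an LLM assign a coarse screening label. The sketch below shows only that shape; the transcription step, prompt wording, and three-way label set are assumptions rather than the paper's method, and any such output would require clinician review.

```python
# Hedged sketch of a speech-to-risk-screening pipeline: ASR, then an LLM
# classification over the transcript. Both model calls are placeholders.
from typing import Literal

RiskLabel = Literal["low", "elevated", "high"]


def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text step; returns the spoken content
    of the recording as plain text."""
    raise NotImplementedError


def classify_risk(transcript: str) -> RiskLabel:
    """Placeholder for an LLM call mapping a transcript to a coarse label."""
    prompt = (
        "Given the following transcript of spontaneous speech, reply with "
        "exactly one word (low, elevated, or high) describing apparent "
        "risk level for a clinical screening workflow.\n\n" + transcript
    )
    raise NotImplementedError  # send `prompt` to an LLM and parse the reply


def screen(audio_path: str) -> RiskLabel:
    return classify_risk(transcribe(audio_path))
```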