Advancements in Expressive Robotics and Human-Robot Interaction

Research in expressive robotics and human-robot interaction is moving toward more nuanced, empathetic systems: robots that adapt their behavior to the social and physical environments they operate in. Recent work highlights the role of auditory and tactile cues in making interactions more immersive and realistic, for example in robotic patient simulators that pair palpation with vocal expressions of pain. Researchers are also applying sentiment analysis and machine learning to produce speech synthesis that is both expressive and appropriate to context, while lightweight, customizable text-to-speech toolkits let social robots convey a range of emotions and engage users in a more human-like way. Together, these advances stand to improve the effectiveness of social robots in applications such as language teaching and clinical education. Some noteworthy papers in this area include:

  • EmoNews, which presents a spoken dialogue system for expressive news conversations, using a large-language-model-based sentiment analyzer to select an appropriate emotion for each utterance.
  • EmojiVoice, which introduces a free, customizable text-to-speech toolkit that lets social roboticists build temporally variable, expressive speech for social robots.
  • I Know You're Listening, which addresses the need for a lightweight, expressive robot voice and explores how to adapt that voice to the physical and social ambient environment (both ideas are sketched in code after this list).
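
To make the first two ideas concrete, here is a minimal, self-contained sketch of the kind of pipeline EmoNews and EmojiVoice describe: a sentiment analyzer assigns an emotion label to each utterance, and that label is mapped to prosody controls handed to a TTS engine. Everything below (the keyword scorer standing in for an LLM analyzer, the emotion-to-prosody table, the `TTSRequest` container) is a hypothetical illustration, not the papers' actual code or APIs.

```python
from dataclasses import dataclass

# Hypothetical emotion-to-prosody table; real toolkits such as
# EmojiVoice expose richer, model-specific controls.
PROSODY = {
    "happy":   {"rate": 1.15, "pitch_shift": 2.0,  "energy": 1.2},
    "sad":     {"rate": 0.85, "pitch_shift": -2.0, "energy": 0.8},
    "neutral": {"rate": 1.00, "pitch_shift": 0.0,  "energy": 1.0},
}

POSITIVE = {"win", "great", "celebrate", "recovery"}
NEGATIVE = {"loss", "crash", "tragedy", "injury"}

def classify_emotion(text: str) -> str:
    """Toy keyword scorer standing in for an LLM-based sentiment analyzer."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "happy"
    if score < 0:
        return "sad"
    return "neutral"

@dataclass
class TTSRequest:
    """Utterance plus the prosody controls a downstream TTS engine would use."""
    text: str
    rate: float
    pitch_shift: float
    energy: float

def build_request(text: str) -> TTSRequest:
    # Map the detected emotion to concrete prosody parameters.
    emotion = classify_emotion(text)
    return TTSRequest(text=text, **PROSODY[emotion])

if __name__ == "__main__":
    req = build_request("The home team managed a great comeback win.")
    print(req)  # rate=1.15, pitch_shift=2.0, energy=1.2 for a "happy" reading
```

In a deployed system the keyword scorer would be replaced by the LLM sentiment analyzer and the `TTSRequest` consumed by an expressive synthesizer; the separation between emotion selection and prosody rendering is the part the sketch is meant to show.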
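
The ambient-adaptation idea in I Know You're Listening can likewise be sketched as a simple control loop: estimate the background noise level, then nudge the voice's gain and speaking rate so it stays intelligible without shouting. The thresholds and gain curve below are invented for illustration and are not taken from the paper.

```python
def adapt_voice(noise_db: float,
                base_gain_db: float = 0.0,
                base_rate: float = 1.0) -> dict:
    """Lombard-style adjustment: raise gain and slow speech as ambient
    noise increases. All constants here are illustrative assumptions."""
    # Only compensate above a quiet-room floor of roughly 40 dB.
    excess = max(0.0, noise_db - 40.0)
    gain_db = base_gain_db + min(0.5 * excess, 12.0)  # cap the boost at +12 dB
    rate = max(0.85, base_rate - 0.005 * excess)      # speak slightly slower in noise
    return {"gain_db": gain_db, "rate": rate}

if __name__ == "__main__":
    for noise in (35.0, 55.0, 75.0):
        print(f"{noise:.0f} dB ambient -> {adapt_voice(noise)}")
```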

Sources

Auditory-Tactile Congruence for Synthesis of Adaptive Pain Expressions in RoboPatients

Palpation Alters Auditory Pain Expressions with Gender-Specific Variations in Robopatients

EmoNews: A Spoken Dialogue System for Expressive News Conversations

EmojiVoice: Towards long-term controllable expressivity in robot speech

I Know You're Listening: Adaptive Voice for HRI
