Advancements in Human-Centered AI and Multimodal Interaction

The field of human-centered AI and multimodal interaction is evolving rapidly, with a focus on building more immersive, interactive, and emotionally intelligent systems. Recent work spans emotion analysis, human-robot interaction, and multimodal data processing. Researchers are exploring new methods for emotion recognition, such as using geometric animations to establish a correspondence between discrete emotion labels and the continuous valence-arousal-dominance space. There is also growing interest in systems that adapt to diverse physical scenes and provide realistic acoustic rendering, improving the overall user experience. Noteworthy papers include EmoVid, a multimodal emotion-annotated video dataset that establishes a new benchmark for affective video computing, and SAMOSA, an on-device system for spatially accurate sound rendering that enables efficient acoustic calibration via scene priors. These advances have potential applications in virtual reality, education, entertainment, and healthcare.
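To make the idea of a correspondence between discrete emotion labels and a continuous valence-arousal-dominance (VAD) space concrete, the minimal sketch below places a few labels at placeholder VAD coordinates and maps a continuous prediction back to its nearest label. The anchor values, names, and nearest-neighbor rule are illustrative assumptions, not taken from EmoVid or any other cited paper.

```python
import numpy as np

# Hypothetical mapping from discrete emotion labels to VAD coordinates in [0, 1]^3.
# These values are placeholders chosen for illustration only.
VAD_ANCHORS = {
    "joy":     np.array([0.85, 0.60, 0.60]),
    "anger":   np.array([0.15, 0.80, 0.70]),
    "fear":    np.array([0.10, 0.75, 0.25]),
    "sadness": np.array([0.20, 0.30, 0.30]),
    "neutral": np.array([0.50, 0.50, 0.50]),
}

def nearest_label(vad_point: np.ndarray) -> str:
    """Return the discrete label whose anchor is closest to a continuous VAD estimate."""
    return min(VAD_ANCHORS, key=lambda label: np.linalg.norm(VAD_ANCHORS[label] - vad_point))

if __name__ == "__main__":
    # A model that predicts continuous VAD values can still report a familiar label.
    print(nearest_label(np.array([0.80, 0.55, 0.50])))  # -> "joy"
```

Under this kind of scheme, continuous VAD predictions and discrete label taxonomies can be compared in a single space, which is the basic motivation behind the label-to-dimension correspondences discussed above.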
Sources
Towards Affect-Adaptive Human-Robot Interaction: A Protocol for Multimodal Dataset Collection on Social Anxiety
Gamified Virtual Reality Exposure Therapy for Mysophobia: Evaluating the Efficacy of a Simulated Sneeze Intervention
NAMeGEn: Creative Name Generation via A Novel Agent-based Multiple Personalized Goal Enhancement Framework