Advancements in Human-Centered AI and Multimodal Interaction

The field of human-centered AI and multimodal interaction is evolving rapidly, with a focus on building more immersive, interactive, and emotionally intelligent systems. Recent work spans emotion analysis, human-robot interaction, and multimodal data processing. For emotion recognition, researchers are exploring methods such as using geometric animations as proxies to map discrete emotion labels onto the continuous valence-arousal-dominance (VAD) space (sketched below). There is also growing interest in systems that adapt to diverse physical scenes and provide realistic acoustic rendering, improving the user experience in extended reality (a baseline illustration follows the first sketch). Noteworthy papers include EmoVid, a multimodal emotion-annotated video dataset that establishes a new benchmark for affective video computing, and SAMOSA, a novel on-device system for spatially accurate sound rendering that enables efficient acoustic calibration via scene priors. These advances have the potential to benefit applications ranging from virtual reality and education to entertainment and healthcare.
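
To make the discrete-to-VAD mapping concrete, here is a minimal sketch that assigns a continuous VAD point to its nearest discrete emotion label. The coordinates are rough approximations commonly seen in the affect literature, chosen purely for illustration; they are not taken from the proxy-based mapping paper, and the names `EMOTION_VAD` and `nearest_emotion` are hypothetical.

```python
import numpy as np

# Illustrative valence-arousal-dominance (VAD) anchors for a few discrete
# emotion labels. Rough, commonly cited approximations; NOT the mapping
# derived in the cited paper.
EMOTION_VAD = {
    "joy":     ( 0.8,  0.5,  0.4),
    "anger":   (-0.5,  0.6,  0.3),
    "sadness": (-0.6, -0.4, -0.3),
    "fear":    (-0.6,  0.6, -0.4),
    "calm":    ( 0.4, -0.5,  0.2),
}

def nearest_emotion(vad_point):
    """Return the discrete label whose VAD anchor is closest (Euclidean)."""
    point = np.asarray(vad_point, dtype=float)
    return min(
        EMOTION_VAD,
        key=lambda label: np.linalg.norm(point - np.asarray(EMOTION_VAD[label])),
    )

# Example: a mildly positive, low-arousal point maps to "calm".
print(nearest_emotion((0.3, -0.4, 0.1)))
```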

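On the acoustic side, a common baseline for spatial sound is convolving a mono source with a pair of head-related impulse responses (HRIRs). The sketch below shows that textbook operation with a crude distance gain; it is a generic illustration under stated assumptions, not SAMOSA's calibration pipeline, and the function name and dummy impulse responses are invented for the example.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right, distance_m=1.0):
    """Convolve a mono signal with left/right HRIRs and apply a simple
    1/distance gain. A textbook baseline, not SAMOSA's method."""
    gain = 1.0 / max(distance_m, 0.1)  # crude distance attenuation
    left = np.convolve(mono, hrir_left) * gain
    right = np.convolve(mono, hrir_right) * gain
    return np.stack([left, right], axis=-1)  # (samples, 2) stereo buffer

# Toy usage: a 440 Hz tone and dummy 128-tap impulse responses.
sr = 16000
t = np.arange(sr) / sr
tone = 0.1 * np.sin(2 * np.pi * 440 * t)
hrir_l = np.random.default_rng(0).normal(size=128) * 0.05
hrir_r = np.random.default_rng(1).normal(size=128) * 0.05
stereo = render_binaural(tone, hrir_l, hrir_r, distance_m=2.0)
```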
Sources

Graph Neural Field with Spatial-Correlation Augmentation for HRTF Personalization

EmoVid: A Multimodal Emotion Video Dataset for Emotion-Centric Video Understanding and Generation

Enhancing XR Auditory Realism via Multimodal Scene-Aware Acoustic Rendering

A Proxy-Based Method for Mapping Discrete Emotions onto VAD model

Hi-Reco: High-Fidelity Real-Time Conversational Digital Humans

EmoVerse: A MLLMs-Driven Emotion Representation Dataset for Interpretable Visual Emotion Analysis

Designing-with More-than-Human Through Human Augmentation

Simple Lines, Big Ideas: Towards Interpretable Assessment of Human Creativity from Drawings

Towards Affect-Adaptive Human-Robot Interaction: A Protocol for Multimodal Dataset Collection on Social Anxiety

CreBench: Human-Aligned Creativity Evaluation from Idea to Process to Product

Gamified Virtual Reality Exposure Therapy for Mysophobia: Evaluating the Efficacy of a Simulated Sneeze Intervention

TailCue: Exploring Animal-inspired Robotic Tail for Automated Vehicles Interaction

Towards Authentic Movie Dubbing with Retrieve-Augmented Director-Actor Interaction Learning

Painted Heart Beats

PresentCoach: Dual-Agent Presentation Coaching through Exemplars and Interactive Feedback

NAMeGEn: Creative Name Generation via A Novel Agent-based Multiple Personalized Goal Enhancement Framework

The Role of Consequential and Functional Sound in Human-Robot Interaction: Toward Audio Augmented Reality Interfaces

Panel-by-Panel Souls: A Performative Workflow for Expressive Faces in AI-Assisted Manga Creation