Emotion Recognition and Expression in Human-Computer Interaction

The field of human-computer interaction is moving toward more nuanced and personalized emotion recognition and expression. Recent work has concentrated on the challenges posed by high-dimensional, incomplete multi-modal physiological data and on the need for more robust and adaptive feature selection methods. Researchers are also exploring new ways to portray emotion in generated sign language and to detect stress from multimodal wearable sensor data. In parallel, there is growing interest in real-time multimodal emotion estimation systems that track moment-to-moment emotional states and provide personalized feedback.

Noteworthy papers in this area include ASLSL, which learns an adaptive shared latent structure for multi-dimensional emotional feature selection from incomplete multi-modal physiological data, and REFS, which performs robust EEG feature selection for emotion recognition when multi-dimensional annotations are missing. Additionally, the Realtime Multimodal Emotion Estimation system combines neurophysiological and behavioral modalities to track emotional states, and the Stress Detection from Multimodal Wearable Sensor Data study introduces a new dataset and benchmark for automated stress recognition.

Together, these advances stand to improve human-computer interaction, particularly for individuals with severe motor impairments or neurodivergent profiles, and to enable more inclusive and personalized emotion technologies.
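To make the feature-selection theme concrete, the following is a minimal sketch, not the ASLSL or REFS algorithms, of one way features extracted from physiological signals might be scored against multi-dimensional emotion labels (e.g. valence and arousal) when some label dimensions are missing: each feature is correlated only with the observed entries of each label dimension, and the per-dimension scores are averaged. All variable names, array sizes, and the correlation-based scoring rule are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration (not the published ASLSL/REFS methods): rank features
# for multi-dimensional emotion labels when some label dimensions are unannotated.

rng = np.random.default_rng(0)
n_samples, n_features, n_dims = 200, 64, 2      # e.g. EEG features; valence/arousal

X = rng.normal(size=(n_samples, n_features))    # physiological feature matrix
Y = rng.normal(size=(n_samples, n_dims))        # multi-dimensional emotion annotations
mask = rng.random((n_samples, n_dims)) > 0.3    # True where a label dimension is observed

def masked_feature_scores(X, Y, mask):
    """Score features by |Pearson correlation| with each label dimension,
    computed only over the samples where that dimension is annotated."""
    scores = np.zeros(X.shape[1])
    for d in range(Y.shape[1]):
        obs = mask[:, d]                         # samples with this dimension observed
        Xd, yd = X[obs], Y[obs, d]
        Xc = Xd - Xd.mean(axis=0)
        yc = yd - yd.mean()
        denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
        scores += np.abs(Xc.T @ yc) / denom      # per-feature correlation magnitude
    return scores / Y.shape[1]                   # average across label dimensions

scores = masked_feature_scores(X, Y, mask)
top_k = np.argsort(scores)[::-1][:16]            # keep the 16 most label-relevant features
print("selected feature indices:", top_k)
```

The published methods go well beyond this correlation heuristic (ASLSL by learning a shared latent structure across incomplete modalities, REFS by building robustness to missing multi-dimensional annotations into the selection itself), but the masking pattern, scoring only where labels exist, is the shared idea this sketch is meant to convey.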
Sources
ASLSL: Adaptive shared latent structure learning with incomplete multi-modal physiological data for multi-dimensional emotional feature selection
REFS: Robust EEG feature selection with missing multi-dimensional annotation for emotion recognition