Emotion Recognition and Expression in Human-Computer Interaction

The field of human-computer interaction is moving toward more nuanced and personalized emotion recognition and expression. Recent work addresses the challenges of high-dimensional, incomplete multi-modal physiological data and the need for more robust, adaptive feature selection. Researchers are also exploring how to portray emotion in generated sign language, how to detect stress from multimodal wearable sensor data, and how to build real-time multimodal emotion estimation systems that track moment-to-moment emotional states and provide personalized feedback.

Noteworthy papers include ASLSL, which proposes a method for feature selection from incomplete multi-modal physiological signals, and REFS, which presents a robust EEG feature selection method for emotion recognition with missing multi-dimensional annotations. The Realtime Multimodal Emotion Estimation system combines neurophysiological and behavioral modalities to track emotional states, and the Stress Detection from Multimodal Wearable Sensor Data study introduces a new dataset and benchmark for automated stress recognition. Together, these advances stand to improve human-computer interaction, particularly for individuals with severe motor impairments or neurodivergent profiles, and to enable more inclusive and personalized emotion technologies.
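To make the incomplete multi-modal setting concrete, the sketch below shows a deliberately simple baseline: whole modalities are dropped at random for some participants, missing values are mean-imputed, and features are ranked by mutual information with a continuous emotion rating. The modality names, block sizes, and the imputation-plus-mutual-information strategy are illustrative assumptions only; this is not the ASLSL or REFS method, which instead learn shared latent structure and perform robust selection jointly.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)

# Toy multi-modal physiological features: EEG, EDA, and heart-rate blocks.
# Block names and sizes are illustrative, not taken from any cited paper.
n_samples = 200
blocks = {"eeg": 32, "eda": 8, "hr": 4}
X = np.hstack([rng.normal(size=(n_samples, d)) for d in blocks.values()])

# Synthetic continuous emotion label (e.g. arousal) driven by a few features.
y = X[:, 0] + 0.5 * X[:, 35] + 0.1 * rng.normal(size=n_samples)

# Simulate incompleteness: drop whole modalities for random participants.
edges = np.cumsum([0] + list(blocks.values()))
drop = rng.random((n_samples, len(blocks))) < 0.2
for m, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
    X[drop[:, m], lo:hi] = np.nan

# Baseline handling of the missing data: per-feature mean imputation,
# then rank features by mutual information with the emotion rating.
X_filled = SimpleImputer(strategy="mean").fit_transform(X)
mi = mutual_info_regression(X_filled, y, random_state=0)
top_k = np.argsort(mi)[::-1][:10]
print("Top-10 feature indices by mutual information:", top_k)
```

A selection step like this only illustrates the problem shape; methods such as ASLSL and REFS are designed precisely because simple imputation followed by per-feature scoring ignores cross-modal structure and missing-annotation patterns.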

Sources

ASLSL: Adaptive shared latent structure learning with incomplete multi-modal physiological data for multi-dimensional emotional feature selection

REFS: Robust EEG feature selection with missing multi-dimensional annotation for emotion recognition

Challenges and opportunities in portraying emotion in generated sign language

9th Workshop on Sign Language Translation and Avatar Technologies (SLTAT 2025)

Blink-to-code: real-time Morse code communication via eye blink detection and classification

Realtime Multimodal Emotion Estimation using Behavioral and Neurophysiological Data

Stress Detection from Multimodal Wearable Sensor Data

Reproducible Physiological Features in Affective Computing: A Preliminary Analysis on Arousal Modeling

Differential Physiological Responses to Proxemic and Facial Threats in Virtual Avatar Interactions
