The field of Human-Computer Interaction (HCI) and sensing technologies is evolving rapidly, with a focus on developing innovative solutions to real-world problems. Recent research explores multimodal sensing, machine learning, and data-driven approaches to improve human-computer interaction, fatigue detection, and stress management. Notably, integrating physiological signals such as heart rate, skin temperature, and respiration rate has shown promise for estimating anxiety, detecting fatigue, and supporting mindfulness training. In parallel, notification-based interventions and personalized music interventions have demonstrated potential for reducing smartphone overuse and improving mental well-being. Together, these advances could affect many aspects of daily life, from healthcare and education to transportation and entertainment.
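To make the signal-fusion idea above concrete, here is a minimal illustrative sketch of combining heart rate, skin temperature, and respiration rate into a single fatigue score. All feature names, baselines, weights, and thresholds are hypothetical and not taken from any of the cited papers; real systems would learn such mappings from labeled data.

```python
# Hypothetical sketch: fuse simple statistics from several physiological
# signals into one 0..1 fatigue score. Weights and baselines are invented
# for illustration only.

def extract_features(heart_rate, skin_temp, resp_rate):
    """Summarize each raw signal (a list of samples) into simple statistics."""
    def mean(xs):
        return sum(xs) / len(xs)

    # Crude heart-rate-variability proxy: mean absolute successive difference.
    hrv_proxy = mean([abs(a - b) for a, b in zip(heart_rate, heart_rate[1:])])
    return {
        "hr_mean": mean(heart_rate),
        "hrv_proxy": hrv_proxy,
        "temp_mean": mean(skin_temp),
        "resp_mean": mean(resp_rate),
    }

def fatigue_score(features):
    """Weighted combination of normalized features (hypothetical weights)."""
    hr_dev = max(0.0, features["hr_mean"] - 60.0) / 60.0      # elevated heart rate
    hrv_drop = max(0.0, 1.0 - features["hrv_proxy"] / 5.0)    # reduced variability
    resp_dev = max(0.0, features["resp_mean"] - 12.0) / 12.0  # faster breathing
    score = 0.4 * hr_dev + 0.4 * hrv_drop + 0.2 * resp_dev
    return min(score, 1.0)

if __name__ == "__main__":
    feats = extract_features(
        heart_rate=[72, 74, 73, 75, 74],
        skin_temp=[33.1, 33.0, 33.2, 33.1, 33.0],
        resp_rate=[14, 15, 14, 16, 15],
    )
    print(round(fatigue_score(feats), 3))
```

In practice, the hand-tuned weighting above would be replaced by a classifier trained on labeled physiological recordings, but the pipeline shape (per-signal features, then fusion into one estimate) is the same.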
Noteworthy papers include: Transformer-Based Framework for Motion Capture Denoising and Anomaly Detection in Medical Rehabilitation, which proposes an end-to-end deep learning framework for enhancing medical rehabilitation; UL-DD: A Multimodal Drowsiness Dataset Using Video, Biometric Signals, and Behavioral Data, which presents a comprehensive public dataset for driver drowsiness detection; and Improving Out-of-distribution Human Activity Recognition via IMU-Video Cross-modal Representation Learning, which proposes a cross-modal self-supervised pretraining approach for improving human activity recognition.