Advances in Human Activity Recognition and Sensor Technologies

The field of human activity recognition (HAR) and sensor technologies is advancing rapidly, with a focus on developing more accurate, robust, and generalizable models. Recent research applies deep learning techniques such as reinforcement learning and graph neural networks to improve the performance of activity recognition systems, and there is growing interest in multimodal sensing and fusion to enhance their accuracy and reliability. Notable papers in this area include:

EZhouNet, a graph neural network-based framework for respiratory sound event detection that improves flexibility and applicability.

Reinforcement Learning Driven Generalizable Feature Representation for Cross-User Activity Recognition, which uses reinforcement learning to learn user-invariant activity dynamics, achieving superior accuracy without per-user calibration.

WatchHAR, a real-time on-device human activity recognition system for smartwatches that achieves over 90% accuracy across more than 25 activity classes while addressing privacy and latency concerns.

COBRA, a multimodal sensing deep learning framework for remote chronic obesity management via wrist-worn activity monitoring, demonstrating strong performance across multiple architectures.

i-Mask, a breath-driven activity recognition approach built on a custom-developed mask with integrated sensors, achieving over 95% accuracy and showing potential for healthcare and fitness applications.
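To make the recurring ingredients concrete, the sketch below shows a minimal HAR-style pipeline: hand-crafted features over a window of tri-axial accelerometer samples, plus decision-level (late) fusion of two modalities' class probabilities, as in multimodal systems like COBRA. This is an illustrative baseline only, not any paper's actual method; the feature set and the equal fusion weight are assumptions.

```python
from math import sqrt
from statistics import mean, pstdev

def extract_features(window):
    """Baseline features from one window of tri-axial accelerometer data.

    window: list of (x, y, z) samples. Returns per-axis mean and standard
    deviation plus mean/std of the signal magnitude -- a common hand-crafted
    feature set in wearable HAR pipelines.
    """
    xs, ys, zs = zip(*window)
    mags = [sqrt(x * x + y * y + z * z) for x, y, z in window]
    return [mean(xs), mean(ys), mean(zs),
            pstdev(xs), pstdev(ys), pstdev(zs),
            mean(mags), pstdev(mags)]

def late_fusion(prob_a, prob_b, w=0.5):
    """Decision-level fusion: weighted average of two modalities'
    class-probability vectors, renormalized to sum to 1."""
    fused = [w * a + (1 - w) * b for a, b in zip(prob_a, prob_b)]
    total = sum(fused)
    return [p / total for p in fused]

# Example: a 2-second window at 50 Hz (100 samples), here a constant signal
window = [(0.1, -0.2, 9.8)] * 100
feats = extract_features(window)          # 8-dimensional feature vector

# Fuse posteriors from two hypothetical modalities (e.g. motion and audio)
fused = late_fusion([0.7, 0.2, 0.1], [0.5, 0.4, 0.1])
```

Late fusion is the simplest way to combine modalities because each branch can be trained and deployed independently; feature-level (early) fusion instead concatenates per-modality features before classification.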

Sources

Speech Foundation Models Generalize to Time Series Tasks from Wearable Sensor Data

EZhouNet: A framework based on graph neural network and anchor interval for the respiratory sound event detection

Reinforcement Learning Driven Generalizable Feature Representation for Cross-User Activity Recognition

Enhancing Fitness Movement Recognition with Attention Mechanism and Pre-Trained Feature Extractors

COBRA: Multimodal Sensing Deep Learning Framework for Remote Chronic Obesity Management via Wrist-Worn Activity Monitoring

i-Mask: An Intelligent Mask for Breath-Driven Activity Recognition

WatchHAR: Real-time On-device Human Activity Recognition System for Smartwatches

VCO-CARE: VCO-based Calibration-free Analog Readout for Electrodermal activity sensing

Ensemble Distribution Distillation for Self-Supervised Human Activity Recognition

Classification of 24-hour movement behaviors from wrist-worn accelerometer data: from handcrafted features to deep learning techniques
