Advances in Human Activity Recognition and Sensor Technologies

The field of human activity recognition and sensor technologies is advancing rapidly, with a focus on developing more accurate, robust, and generalizable models. Recent research has explored deep learning techniques such as reinforcement learning and graph neural networks to improve the performance of activity recognition systems. There is also growing interest in multimodal sensing and fusion techniques to enhance the accuracy and reliability of these systems (an illustrative fusion sketch follows the list below).

Notable papers in this area include:

- EZhouNet proposes a graph neural network-based framework with anchor intervals for respiratory sound event detection, improving flexibility and applicability.
- Reinforcement Learning Driven Generalizable Feature Representation for Cross-User Activity Recognition introduces a framework that uses reinforcement learning to learn user-invariant activity dynamics, achieving superior cross-user accuracy without per-user calibration.
- WatchHAR presents a real-time, on-device human activity recognition system for smartwatches, reaching over 90% accuracy across more than 25 activity classes while addressing privacy and latency concerns.
- COBRA proposes a multimodal sensing deep learning framework for remote chronic obesity management via wrist-worn activity monitoring, demonstrating high performance across multiple architectures.
- i-Mask presents a breath-driven activity recognition approach using a custom-developed mask with integrated sensors, achieving over 95% accuracy and highlighting potential applications in healthcare and fitness.
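To make the fusion idea concrete, below is a minimal sketch of one common pattern, late fusion of two wrist-worn inertial streams, written against PyTorch. The two-branch architecture, window length, channel counts, and class count are hypothetical choices for illustration; this does not reproduce the design of any paper listed here.

    # Illustrative late-fusion sketch for multimodal activity recognition.
    # Each modality gets its own encoder; features are concatenated before
    # a shared classification head. All dimensions are assumptions.
    import torch
    import torch.nn as nn

    class LateFusionHAR(nn.Module):
        def __init__(self, n_classes: int = 8):
            super().__init__()
            # One 1-D conv encoder per modality (3-axis accel, 3-axis gyro).
            self.accel_enc = nn.Sequential(
                nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
            self.gyro_enc = nn.Sequential(
                nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
            # Fused 64-dim feature vector -> activity logits.
            self.head = nn.Linear(64, n_classes)

        def forward(self, accel: torch.Tensor, gyro: torch.Tensor) -> torch.Tensor:
            # accel, gyro: (batch, 3, window) windows of raw sensor samples.
            fused = torch.cat([self.accel_enc(accel), self.gyro_enc(gyro)], dim=1)
            return self.head(fused)

    model = LateFusionHAR()
    logits = model(torch.randn(4, 3, 128), torch.randn(4, 3, 128))
    print(logits.shape)  # torch.Size([4, 8])

Late fusion is only one option; early fusion (stacking raw channels into a single encoder) or attention-based fusion are common alternatives, and the cited papers may use different schemes.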
Sources
EZhouNet: A framework based on graph neural network and anchor interval for the respiratory sound event detection
Reinforcement Learning Driven Generalizable Feature Representation for Cross-User Activity Recognition