The field of human motion capture and tracking is moving toward methods that are both less intrusive and more privacy-preserving. Researchers are exploring pressure signals and tactile interactions to capture human motion, eliminating the need for specialized lighting setups, cameras, or wearable devices. This approach has shown promising results in multi-person scenarios, recovering global human meshes and tracking individuals accurately. A second thread is real-time control of avatar locomotion, which allows more expressive and immediate interaction. Deep learning frameworks paired with multi-view datasets are also gaining traction for pedestrian detection and tracking, particularly in complex environments.

Noteworthy papers include:
- Pressure2Motion, which pioneers the use of pressure data and linguistic priors for motion generation.
- TouchWalker, which enables real-time avatar locomotion control using finger-walking gestures on a touchscreen.
- PressTrack-HMR, which recovers multi-person global human meshes solely from pressure signals.
- MATRIX, which introduces a comprehensive dataset and deep learning framework for multi-view pedestrian detection and tracking.
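To make the pressure-signal idea concrete, the toy sketch below computes the center of pressure of a single pressure-mat frame. This is only an illustration of how position information falls out of raw pressure data; systems like PressTrack-HMR instead feed whole frame sequences to a learned model to recover full meshes, and the function and data layout here are assumptions, not the paper's method.

```python
def center_of_pressure(frame):
    """Return the pressure-weighted (row, col) centroid of a mat frame.

    frame: 2D list of non-negative pressure readings (rows x cols).
    Returns None if no pressure is registered. Illustrative only; a
    learned tracker consumes whole frames rather than a single centroid.
    """
    total = 0.0
    r_acc = 0.0
    c_acc = 0.0
    for r, row in enumerate(frame):
        for c, p in enumerate(row):
            total += p
            r_acc += r * p
            c_acc += c * p
    if total == 0:
        return None
    return (r_acc / total, c_acc / total)
```

Tracking a person across frames then reduces to following this centroid over time; multi-person tracking additionally requires segmenting the mat into per-person pressure blobs before computing one centroid each.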
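The finger-walking interaction behind TouchWalker can be sketched with a minimal step detector: two fingers tapping left and right of a screen midline stand in for alternating footfalls. The `Touch` type, the fixed midline, and the alternation rule are all hypothetical simplifications, not the paper's actual gesture model.

```python
from dataclasses import dataclass

@dataclass
class Touch:
    t: float   # timestamp in seconds
    x: float   # normalized horizontal touchscreen coordinate in [0, 1]

def detect_steps(touches, midline=0.5):
    """Count alternating left/right finger taps as virtual walking steps.

    A touch left or right of the midline is treated as a footfall of the
    corresponding virtual foot; a new step is registered only when the
    side alternates, mimicking a finger-walking gait.
    """
    steps = 0
    last_side = None
    for touch in touches:
        side = "L" if touch.x < midline else "R"
        if side != last_side:
            steps += 1
            last_side = side
    return steps
```

A real-time system would additionally use tap timing and spacing to drive locomotion speed and turning, rather than just counting steps.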