Tactile Robotics and Human-Machine Interaction

The field of tactile robotics is rapidly advancing, with a growing focus on developing robots that perceive and interact with their environment through touch. Recent research has explored a range of tactile sensing approaches, including piezoresistive, piezoelectric, capacitive, magnetic, and optical sensors, supported by new simulation tools and algorithms for interpreting tactile data. Integrating tactile sensing with other modalities, such as vision, is also becoming increasingly important. In parallel, there is growing interest in human-machine interaction, with researchers exploring new methods for gesture recognition, sign language translation, and human-robot collaboration. Notable papers in this area include:

- Tactile Gesture Recognition with Built-in Joint Sensors for Industrial Robots, which demonstrates the feasibility of tactile gesture recognition without external sensors, using only a robot's built-in joint sensing.
- Grab-n-Go, which introduces a wearable device that leverages active acoustic sensing to recognize subtle hand microgestures while holding various objects.
- Foundation Model for Skeleton-Based Human Action Understanding, which presents a unified framework spanning skeleton-based action understanding tasks.
- MaskSem, which introduces a novel semantic-guided masking method for learning 3D hybrid high-order motion representations.
- HandCraft, which addresses the challenge of limited data in sign language recognition by introducing a novel sign generation model for synthetic data augmentation.
- BioSonix, which augments tool navigation in mixed reality environments by providing auditory representations of tool-tissue dynamics.
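To make the external-sensor-free idea concrete, here is a minimal sketch of how contact gestures might be detected from a robot's joint torque residuals alone. All names, thresholds, and the tap/hold distinction are illustrative assumptions for this sketch, not details from the paper:

```python
import numpy as np

def detect_contact_events(torque_residual, threshold=0.5):
    """Flag time steps where the external joint torque residual exceeds a
    threshold, indicating likely human contact on the robot arm."""
    return np.abs(torque_residual) > threshold

def classify_gesture(contact_mask, dt=0.01, hold_duration=0.5):
    """Classify a contact episode by its duration: a brief spike is a 'tap',
    a sustained contact is a 'hold'."""
    duration = contact_mask.sum() * dt  # seconds of detected contact
    if duration == 0:
        return "none"
    return "hold" if duration >= hold_duration else "tap"

# Simulated residual signal: quiet, then a 0.05 s spike from a light tap.
residual = np.zeros(200)
residual[50:55] = 1.2
print(classify_gesture(detect_contact_events(residual)))  # -> tap
```

A real system would estimate the residual from a dynamics model and use a learned classifier over richer features, but the pipeline shape (residual, contact detection, episode classification) is the same.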

Sources

Tactile Robotics: An Outlook

Grab-n-Go: On-the-Go Microgesture Recognition with Objects in Hand

Tactile Gesture Recognition with Built-in Joint Sensors for Industrial Robots

Foundation Model for Skeleton-Based Human Action Understanding

Real-Time Sign Language Gestures to Speech Transcription using Deep Learning

MaskSem: Semantic-Guided Masking for Learning 3D Hybrid High-Order Motion Representation

HandCraft: Dynamic Sign Generation for Synthetic Data Augmentation

Towards Skeletal and Signer Noise Reduction in Sign Language Production via Quaternion-Based Pose Encoding and Contrastive Learning

BioSonix: Can Physics-Based Sonification Perceptualize Tissue Deformations From Tool Interactions?
