The field of human-computer interaction is moving toward more personalized and naturalistic approaches to gesture recognition and generation. Researchers are exploring reinforcement learning and imitation learning to produce more human-like gestures in embodied agents such as robots. Recent work includes contextual bandit algorithms that personalize hand gesture recognition without explicit calibration, as well as the use of motion capture data to generate pointing gestures. Evaluating gestures in virtual reality is also gaining importance, since it offers a more immersive and realistic setting for studying human-computer interaction. Noteworthy papers include:
- A Contextual Bandits Approach for Personalization of Hand Gesture Recognition, which proposes a calibrationless, longitudinal personalization method based on a contextual multi-armed bandit algorithm.
- Learning to Generate Pointing Gestures in Situated Embodied Conversational Agents, which presents a framework for generating pointing gestures in embodied agents by combining imitation and reinforcement learning.
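To make the contextual bandit idea concrete, the sketch below shows a standard LinUCB-style bandit choosing, per interaction, which of several candidate gesture recognizers to trust for a given user. This is a generic illustration under assumed details, not the method from the cited paper: the arms (candidate recognizers), context features, and binary reward (whether the chosen recognizer's prediction was confirmed by the user) are all hypothetical.

```python
import numpy as np

class LinUCB:
    """Generic LinUCB contextual bandit: each arm is a candidate gesture
    recognizer; the context vector describes the current interaction
    (hypothetical features, e.g. session time, device orientation)."""

    def __init__(self, n_arms, dim, alpha=0.5):
        self.alpha = alpha                               # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrix
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vector

    def select(self, x):
        # Score each arm by estimated reward plus an upper-confidence bonus:
        # theta^T x + alpha * sqrt(x^T A^{-1} x)
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        # Rank-1 update from the observed reward, e.g. 1.0 if the chosen
        # recognizer's label matched the gesture the user confirmed.
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy longitudinal simulation: recognizer 1 is always right for this user,
# recognizer 0 never is, so the bandit should converge on arm 1.
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=2, dim=3)
picks = []
for t in range(200):
    x = np.concatenate(([1.0], rng.random(2)))  # bias term + 2 context features
    arm = bandit.select(x)
    reward = 1.0 if arm == 1 else 0.0
    bandit.update(arm, x, reward)
    picks.append(arm)
```

Because the confidence bonus shrinks as an arm accumulates observations, the loop personalizes over time without any explicit calibration phase, which matches the calibrationless, longitudinal framing described above.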