Semantic Information and Gestures in Human-Computer Interaction

The field of human-computer interaction is moving toward a more nuanced understanding of how semantic information and gestures facilitate effective communication. Recent research highlights the importance of context and user needs in the design of robotic systems and virtual agents: semantic information and gestures have been shown to improve situational awareness, convey meaning, and support turn-taking in conversation. Notably, integrating image analysis with semantic matching enables the generation of iconic and deictic gestures that are semantically coherent with the accompanying verbal utterance. Two papers stand out: ImaGGen, which introduces a zero-shot system for generating co-speech semantic gestures grounded in language and image input, and Modeling Turn-Taking with Semantically Informed Gestures, which demonstrates the complementary role of semantically guided gestures in multimodal turn-taking.
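
As a rough illustration of the image-grounded semantic matching idea, the Python sketch below pairs words in an utterance with object labels detected in an image and selects a target for a deictic (pointing) gesture. It is a minimal sketch, not the ImaGGen pipeline: the `DetectedObject` type, the similarity threshold, and the toy lexical matcher (standing in for learned semantic embeddings) are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the ImaGGen implementation): match words
# in an utterance against object labels detected in an image, and emit a
# deictic gesture toward the best-matching object if it clears a threshold.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class DetectedObject:
    label: str                 # e.g. from an off-the-shelf object detector
    position: tuple[int, int]  # (x, y) image coordinates to point at


def semantic_match(word: str, label: str) -> float:
    """Toy lexical similarity; a real system would compare learned
    word/sentence embeddings rather than surface strings."""
    return SequenceMatcher(None, word.lower(), label.lower()).ratio()


def plan_deictic_gesture(utterance: str,
                         objects: list[DetectedObject],
                         threshold: float = 0.8):
    """Return the (word, object) pair to align a pointing gesture with,
    or None when nothing in the scene matches the utterance."""
    best = None
    for word in utterance.split():
        clean = word.strip(".,!?")
        for obj in objects:
            score = semantic_match(clean, obj.label)
            if score >= threshold and (best is None or score > best[0]):
                best = (score, clean, obj)
    if best is None:
        return None
    _, word, obj = best
    return word, obj  # time the pointing stroke to this word's onset


objects = [DetectedObject("mug", (120, 340)),
           DetectedObject("laptop", (400, 210))]
print(plan_deictic_gesture("Could you hand me that mug?", objects))
# -> ('mug', DetectedObject(label='mug', position=(120, 340)))
```

In practice the matching step would run over embeddings of noun phrases and detector labels, but the control flow, matching words against scene objects and grounding the gesture in the winner, is the same.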

Sources

First Responders' Perceptions of Semantic Information for Situational Awareness in Robot-Assisted Emergency Response

Conveying Meaning through Gestures: An Investigation into Semantic Co-Speech Gesture Generation

ImaGGen: Zero-Shot Generation of Co-Speech Semantic Gestures Grounded in Language and Image Input

Modeling Turn-Taking with Semantically Informed Gestures
