The field of human-computer interaction is moving towards a more nuanced understanding of how semantic information and gestures facilitate effective communication. Recent research highlights the importance of context and user needs in the development of robotic systems and virtual agents. Semantic information and gestures have been shown to improve situational awareness, convey meaning, and facilitate turn-taking in conversation. Notably, integrating image analysis with semantic matching has enabled the generation of iconic and deictic gestures that are semantically coherent with verbal utterances. Two noteworthy papers illustrate these advances: ImaGGen introduces a zero-shot system for generating co-speech semantic gestures grounded in language and image input, and Modeling Turn-Taking with Semantically Informed Gestures demonstrates the complementary role of semantically guided gestures in multimodal turn-taking.
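As a rough illustration of how semantic matching between speech content and detected scene objects might drive gesture selection, the sketch below pairs an utterance word with visual object labels via embedding similarity, choosing a deictic (pointing) gesture when a match exists and falling back to an iconic (depictive) gesture otherwise. The toy vectors, function names, and threshold are hypothetical stand-ins for a real embedding model and detector, not the actual pipeline of the papers above:

```python
from math import sqrt

# Toy word vectors standing in for a real embedding model (hypothetical values).
VECTORS = {
    "cup":   (0.90, 0.10, 0.00),
    "mug":   (0.85, 0.20, 0.05),
    "table": (0.10, 0.90, 0.10),
    "run":   (0.00, 0.10, 0.90),
}

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def plan_gesture(word, detected_objects, threshold=0.8):
    """Return ('deictic', object) if the word semantically matches a
    detected scene object, else ('iconic', word) as a fallback."""
    wv = VECTORS.get(word)
    if wv is None:
        return None
    best, best_sim = None, 0.0
    for obj in detected_objects:
        ov = VECTORS.get(obj)
        if ov is None:
            continue
        sim = cosine(wv, ov)
        if sim > best_sim:
            best, best_sim = obj, sim
    if best is not None and best_sim >= threshold:
        return ("deictic", best)   # point at the matched object
    return ("iconic", word)        # depict the referent instead

# "mug" matches the detected "cup", so a pointing gesture is planned;
# "run" has no visual counterpart, so an iconic gesture is used.
print(plan_gesture("mug", ["cup", "table"]))
print(plan_gesture("run", ["cup", "table"]))
```

In a real system the toy dictionary would be replaced by learned embeddings and an object detector, but the selection logic captures the core idea: deictic gestures require a grounded visual referent, while iconic gestures can convey meaning without one.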