The field of human-robot interaction and egocentric understanding is moving toward more intuitive and accessible interfaces. Researchers are exploring gaze-guided interaction, wearable devices, and multimodal fusion to improve the accuracy and robustness of robotic manipulation and object recognition. Notable papers in this area include HRT1, which introduces a novel system for human-to-robot trajectory transfer, and RaycastGrasp, which presents an egocentric gaze-guided robotic manipulation interface. Additionally, papers such as Gaze-VLM and GaTector+ are advancing the state of the art in egocentric understanding tasks such as future event prediction and gaze object detection. Together, these approaches are paving the way for more effective and efficient human-robot collaboration, as illustrated by the sketch below.
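To make the gaze-guided interaction idea concrete, the sketch below shows one common pattern for this kind of interface: cast a ray along the estimated gaze direction and select the object whose center falls closest to that ray within a small angular cone. This is a minimal illustrative sketch, not the method of RaycastGrasp or any other cited paper; the function name `select_gazed_object`, the cone threshold, and the point-center object representation are all assumptions made for this example.

```python
import numpy as np

def select_gazed_object(gaze_origin, gaze_dir, objects, max_angle_deg=5.0):
    """Pick the object whose center lies closest to the gaze ray.

    gaze_origin   : (3,) ray origin in the camera/world frame
    gaze_dir      : (3,) vector along the estimated gaze direction
    objects       : dict mapping object id -> (3,) center position
    max_angle_deg : reject objects outside this angular cone

    Hypothetical interface: the names and the angular-cone heuristic
    are assumptions for illustration, not any paper's actual API.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best_id, best_angle = None, np.deg2rad(max_angle_deg)
    for obj_id, center in objects.items():
        to_obj = np.asarray(center) - gaze_origin
        dist = np.linalg.norm(to_obj)
        if dist == 0:
            continue
        # Angle between the gaze ray and the direction to the object
        cos_a = np.clip(np.dot(gaze_dir, to_obj / dist), -1.0, 1.0)
        angle = np.arccos(cos_a)
        if angle < best_angle:
            best_id, best_angle = obj_id, angle
    return best_id  # None if nothing falls inside the cone

# Example: the gaze ray points roughly at the "mug"
objects = {"mug": np.array([0.1, 0.0, 1.0]),
           "box": np.array([0.5, 0.2, 1.2])}
target = select_gazed_object(np.zeros(3), np.array([0.1, 0.0, 1.0]), objects)
print(target)  # -> "mug"
```

In a full pipeline, the selected object id would then seed downstream steps such as grasp planning or trajectory transfer; here the angular-cone test simply tolerates the noise inherent in wearable gaze estimation.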