The field of human-machine interaction is shifting toward multimodal authentication and safety monitoring. Researchers are fusing modalities such as gaze, periocular images, and physiological signals to build systems that are more robust and reliable than any single modality alone, with the potential to improve user experience and safety and to reduce accidents. Advances in machine learning architectures and real-time data processing are driving more accurate and efficient systems, while mixed reality interfaces and eye-tracking technology are being investigated for multi-robot cooperation, UX research, and assistive robotic arms. Noteworthy papers include:
- Ocular Authentication: Fusion of Gaze and Periocular Modalities, which proposes a multimodal authentication system that fuses gaze and periocular signals and outperforms its unimodal counterparts (a minimal fusion sketch follows this list).
- Dual-sensing driving detection model, which introduces a driver fatigue detection method combining computer vision with physiological signal analysis (a second sketch below illustrates the idea).
- Spot-On: A Mixed Reality Interface for Multi-Robot Cooperation, which presents a novel MR framework for collaborative tasks involving multiple robots.
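
To make the fusion idea concrete, here is a minimal sketch of score-level fusion for two biometric matchers. The weighted-sum rule, the min-max normalization, the weight `w_gaze`, and the decision threshold are all illustrative assumptions; the cited paper may use a different fusion strategy entirely.

```python
import numpy as np

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    """Map raw matcher scores to [0, 1] so modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(gaze_scores, periocular_scores, w_gaze=0.4):
    """Weighted-sum score-level fusion of gaze and periocular matchers.

    w_gaze is a hypothetical tuning parameter, not a value from the paper.
    """
    g = min_max_normalize(np.asarray(gaze_scores, dtype=float))
    p = min_max_normalize(np.asarray(periocular_scores, dtype=float))
    return w_gaze * g + (1.0 - w_gaze) * p

# Toy similarity scores for five probe-gallery comparisons.
gaze = [0.62, 0.10, 0.80, 0.33, 0.71]
peri = [0.55, 0.20, 0.90, 0.25, 0.60]
fused = fuse_scores(gaze, peri)
accepted = fused >= 0.5  # hypothetical decision threshold
print(fused.round(3), accepted)
```

Score-level fusion is attractive here because each matcher can be trained and calibrated independently; only the final scores need to be combined.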
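For the dual-sensing direction, a rule-based sketch can show how a vision cue and a physiological cue might be combined. The eye aspect ratio (EAR) formula follows Soukupova and Cech (2016); the landmark layout, thresholds, and the heart-rate-slump heuristic are assumptions for illustration, not the paper's method.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six 2-D eye landmarks; low values indicate a closed eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def fatigue_alert(ear_window, hr_window, ear_thresh=0.21, hr_drop=8.0):
    """Flag fatigue when eyes stay nearly closed while heart rate slumps.

    Both thresholds are illustrative; a deployed system would calibrate
    them per driver.
    """
    drowsy_eyes = float(np.mean(ear_window)) < ear_thresh
    hr = np.asarray(hr_window, dtype=float)
    half = len(hr) // 2
    hr_slump = hr[:half].mean() - hr[half:].mean() > hr_drop
    return drowsy_eyes and hr_slump

# Toy windows: sustained low EAR plus a falling heart rate trigger an alert.
print(fatigue_alert([0.18, 0.19, 0.17, 0.20], [72, 71, 70, 61, 60, 59]))
```

Requiring both cues to agree is one simple way such a dual-sensing model can suppress false alarms that either sensor would raise on its own.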