The field of mixed reality is advancing rapidly, with a strong focus on improving human-machine interaction. Researchers are exploring new methods for gaze estimation, facial motion capture, and group interaction sensing, innovations that could enable more seamless and intuitive interaction in mixed reality environments. Notably, uncertainty-aware approaches are being developed to handle challenges such as motion blur and eyelid occlusion, and recent studies are drawing attention to the privacy risks inherent in facial motion data. Two noteworthy papers illustrate these directions: EyeSeg introduces an uncertainty-aware eye segmentation framework for AR/VR, and FacialMotionID demonstrates that users can be identified, and their emotional states inferred, from facial motion data captured in mixed reality environments.
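To make the idea of uncertainty-aware segmentation concrete, the sketch below shows one common way to obtain per-pixel uncertainty: run a stochastic forward pass (here, Monte Carlo dropout) several times and measure how much the predictions disagree. This is an illustrative toy, not the EyeSeg method; `toy_model` is a hypothetical stand-in for a real segmentation network.

```python
import numpy as np

def mc_segment(stochastic_model, image, n_samples=50, rng=None):
    """Run a stochastic segmentation model repeatedly (Monte Carlo dropout)
    and aggregate per-pixel foreground probability and predictive entropy.
    High entropy flags pixels the model is unsure about, e.g. under motion
    blur or eyelid occlusion."""
    rng = rng if rng is not None else np.random.default_rng(0)
    probs = np.stack([stochastic_model(image, rng) for _ in range(n_samples)])
    mean = probs.mean(axis=0)                      # per-pixel P(foreground)
    eps = 1e-8                                     # avoid log(0)
    entropy = -(mean * np.log(mean + eps)
                + (1 - mean) * np.log(1 - mean + eps))
    return mean, entropy

def toy_model(image, rng, drop_p=0.3):
    """Hypothetical stand-in for a dropout-equipped segmentation net:
    randomly drop activations, rescale (inverted dropout), then squash
    thresholded intensities through a sigmoid."""
    keep = rng.random(image.shape) > drop_p
    score = image * keep / (1.0 - drop_p)
    return 1.0 / (1.0 + np.exp(-(score - 0.5) * 10.0))

# Synthetic 8x8 "eye image" with a bright square standing in for the pupil.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0
mean, entropy = mc_segment(toy_model, image, rng=np.random.default_rng(42))
seg = mean > 0.5          # hard mask from the averaged prediction
```

Pixels inside the bright region come out as foreground but carry higher entropy than the uniform background, since dropout perturbs them across samples; a downstream gaze estimator could downweight or reject such uncertain pixels.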