The field of human reconstruction and rendering continues to move toward more accurate and realistic representations of the human body. Recent work has focused on improving the fidelity of gaze redirection, human reconstruction from monocular video, and relighting and novel-view synthesis. Notable advances include explicit 3D eyeball structures, part-based neural radiance fields, and hybrid surface-volumetric Gaussian representations. These innovations have enabled more realistic and editable head avatars, alongside progress in estimating human dance motion from egocentric video and music.

Noteworthy papers include:

- Roll Your Eyes: proposes a 3D gaze redirection framework that leverages an explicit 3D eyeball structure.
- MonoPartNeRF: introduces a part-based pose embedding mechanism to guide pose-aware feature sampling.
- HumanOLAT: provides a large-scale dataset for full-body human relighting and novel-view synthesis.
- SVG-Head: proposes a hybrid representation that explicitly models geometry with 3D Gaussians bound to a FLAME mesh.
- EgoMusic-driven Human Dance Motion Estimation with Skeleton Mamba: develops a method that predicts human dance motion from both egocentric video and music.
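
To make the "Gaussians bound to a mesh" idea concrete, the sketch below shows one common way to anchor Gaussian primitives to mesh triangles: each Gaussian stores a face index, barycentric coordinates, and a small normal offset, so its center tracks the surface as the mesh deforms. This is a minimal NumPy illustration of the general technique under assumed conventions, not the SVG-Head or FLAME implementation; all names (e.g. `bind_gaussians_to_mesh`) are hypothetical.

```python
# Minimal sketch (hypothetical, not the authors' code): Gaussians anchored to mesh
# faces via barycentric coordinates plus a normal offset, so they follow deformations.
import numpy as np


def sample_barycentric(rng, n):
    """Draw n uniform barycentric coordinates on a triangle."""
    u = rng.random((n, 2))
    flip = u.sum(axis=1) > 1.0          # reflect samples outside the unit triangle
    u[flip] = 1.0 - u[flip]
    return np.concatenate([u, 1.0 - u.sum(axis=1, keepdims=True)], axis=1)  # (n, 3)


def bind_gaussians_to_mesh(faces, gaussians_per_face=4, seed=0):
    """Assign each Gaussian a face id, barycentric coords, and a normal offset."""
    rng = np.random.default_rng(seed)
    face_ids, barys, offsets = [], [], []
    for f in range(len(faces)):
        face_ids.append(np.full(gaussians_per_face, f))
        barys.append(sample_barycentric(rng, gaussians_per_face))
        offsets.append(rng.normal(scale=1e-3, size=gaussians_per_face))  # learnable in practice
    return np.concatenate(face_ids), np.concatenate(barys), np.concatenate(offsets)


def gaussian_centers(vertices, faces, face_ids, barys, offsets):
    """Recompute Gaussian centers from the (possibly deformed) mesh."""
    tri = vertices[faces[face_ids]]                 # (G, 3, 3) triangle corners
    centers = np.einsum('gi,gij->gj', barys, tri)   # barycentric interpolation
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-8
    return centers + offsets[:, None] * normals     # displace along the face normal


# Toy usage: a single-triangle "mesh" deformed by moving one vertex.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
fid, bary, off = bind_gaussians_to_mesh(faces)
before = gaussian_centers(verts, faces, fid, bary, off)
verts[2, 2] += 0.5                                   # deform the mesh
after = gaussian_centers(verts, faces, fid, bary, off)  # Gaussians follow the surface
```

In a full system the offsets, scales, rotations, and appearance of each Gaussian would be optimized from images; the point of the binding is only that the explicit mesh drives where the volumetric Gaussians sit, which is what makes the resulting avatar editable.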