The field of human motion analysis and synthesis is advancing rapidly, with a focus on generating realistic, physically plausible motion in diverse environments. Recent work highlights the importance of modeling interaction between humans and their surroundings, such as grasping objects in 3D scenes or navigating crowded spaces. Researchers are exploring new methods for motion capture, reconstruction, and synthesis, including wearable sensors, motion matching (see the sketch below), and reinforcement learning. These advances have the potential to improve applications in robotics, virtual reality, and human-computer interaction.

Noteworthy papers include:

- MOGRAS, which introduces a large-scale dataset of human motion with grasping in 3D scenes.
- Environment-aware Motion Matching, which presents a full-body character animation system that adapts to obstacles and other agents.
- Step2Motion, which proposes a method for reconstructing human locomotion from pressure-sensing insoles.
- PHUMA, which introduces a physically grounded humanoid locomotion dataset that addresses physical artifacts and enables stable imitation.
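To make the motion-matching idea concrete, here is a minimal sketch of the core loop: each frame, a precomputed database of pose and trajectory features is searched for the frame that best matches the current character state and desired future trajectory. This is a generic illustration, not the Environment-aware Motion Matching system from the paper above; all names and the toy data are illustrative, and environment-aware variants additionally encode obstacle and agent features in the query vector.

```python
import numpy as np

class MotionDatabase:
    """Illustrative motion-matching database (names are hypothetical)."""

    def __init__(self, features: np.ndarray, poses: np.ndarray):
        # features: (N, D) matched-feature vectors per database frame
        #           (e.g., foot positions, hip velocity, future root samples)
        # poses:    (N, P) full-body pose for each database frame
        self.poses = poses
        # Normalize features so each dimension contributes comparably.
        self.mean = features.mean(axis=0)
        self.std = features.std(axis=0) + 1e-8
        self.norm = (features - self.mean) / self.std

    def search(self, query: np.ndarray) -> int:
        # Brute-force nearest neighbor; production systems accelerate the
        # same cost with k-d trees or pruning structures.
        q = (query - self.mean) / self.std
        costs = np.sum((self.norm - q) ** 2, axis=1)
        return int(np.argmin(costs))


def step(db: MotionDatabase, current_feature: np.ndarray) -> np.ndarray:
    """Return the best-matching database pose for the current state."""
    return db.poses[db.search(current_feature)]


# Toy usage: 1000 database frames, 12-D features, 60-D poses.
rng = np.random.default_rng(0)
db = MotionDatabase(rng.normal(size=(1000, 12)), rng.normal(size=(1000, 60)))
pose = step(db, rng.normal(size=12))
```

In practice the selected frame is blended into the current animation (e.g., via inertialization) rather than snapped to, and the feature weights trade off responsiveness against motion quality.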