The fields of computer vision and robotics are moving toward more accurate and realistic 3D object modeling and a deeper understanding of human-object interaction. Recent developments have focused on creating visually accurate digital-twin object models, amodal completion for human-object interaction, and generating articulated objects under physical-plausibility constraints. These advancements have the potential to improve simulation, virtual reality, and robot policy fine-tuning.

Noteworthy papers include:

- Omni-Scan: proposes a pipeline for producing high-quality 3D Gaussian Splat models using a bi-manual robot.
- Contact-Aware Amodal Completion: develops a new approach to amodal completion using physical prior knowledge and multi-regional inpainting.
- Guiding Diffusion-Based Articulated Object Generation: proposes a diffusion-based approach for generating articulated objects that aligns them with partial point clouds and improves their physical plausibility.
- Point-Based Shape Representation Generation: proposes a diffusion model designed to generate point-based shape representations with correspondences.
- MagicHOI: presents a method for reconstructing hands and objects from short monocular interaction videos, even under limited viewpoint variation.
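Several of the papers above share a diffusion-based generation backbone. As a generic, hedged illustration of that idea (not a reproduction of any paper's actual method), the sketch below runs a standard DDPM-style reverse-diffusion loop over a 3D point set; the "denoiser" inside the loop is a hypothetical placeholder that stands in for a learned network.

```python
import numpy as np

def sample_point_cloud(n_points=256, n_steps=50, dim=3, seed=0):
    """Generic DDPM-style reverse diffusion over a point set.

    Illustrative only: the denoiser below is a hypothetical stand-in
    (it nudges points toward the unit sphere), not any paper's model.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_points, dim))   # start from Gaussian noise
    betas = np.linspace(1e-4, 0.02, n_steps)   # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    for t in reversed(range(n_steps)):
        # Placeholder "noise prediction": pulls points toward a unit
        # sphere, standing in for a trained denoising network.
        radii = np.linalg.norm(x, axis=1, keepdims=True) + 1e-8
        eps_hat = (x - x / radii) * np.sqrt(1.0 - alpha_bars[t])

        # Standard DDPM posterior-mean update for step t.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:  # add noise on all but the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

cloud = sample_point_cloud()
print(cloud.shape)  # (256, 3)
```

In the real systems summarized above, the placeholder denoiser would be a trained network, and guidance terms (e.g., alignment to partial point clouds or physical-plausibility objectives) would be folded into each update step.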