Diffusion-based Methods for Human Pose Estimation and 3D Reconstruction

The field of human pose estimation and 3D reconstruction is rapidly advancing with the introduction of diffusion-based methods. These methods deliver notable gains in accuracy and robustness, particularly when the input is ambiguous or uncertain. Because diffusion models are generative, they support probabilistic formulations that capture full pose distributions and their associated uncertainty rather than a single point estimate. Combining diffusion models with complementary techniques, such as contrastive guidance and hierarchical motion planning, has further improved performance. Overall, the field is moving toward more sophisticated and robust models that can handle complex, dynamic scenes. Noteworthy papers include SDPose, which fine-tunes pre-trained diffusion models for human pose estimation and achieves state-of-the-art results on several benchmarks, and LieHMR, which recovers human meshes with an autoregressive SO(3) diffusion model that explicitly captures the ambiguity of estimating 3D pose from 2D observations.
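
To make the probabilistic framing concrete, the sketch below shows how a conditional denoising diffusion model can be sampled repeatedly to produce multiple 3D pose hypotheses from a single 2D observation, with the spread of the hypotheses reflecting the depth ambiguity these papers address. This is a minimal illustrative example, not the method of any cited paper: the denoiser architecture, joint count, noise schedule, and conditioning on flattened 2D keypoints are all assumptions.

```python
# Illustrative sketch only: DDPM-style reverse sampling of 3D pose hypotheses
# conditioned on 2D keypoints. All design choices here are assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 17            # assumed COCO-style skeleton
T = 100                    # number of diffusion steps
betas = torch.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class PoseDenoiser(nn.Module):
    """Hypothetical epsilon-predictor: (noisy 3D pose, 2D keypoints, timestep) -> noise estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_JOINTS * 3 + NUM_JOINTS * 2 + 1, 256),
            nn.SiLU(),
            nn.Linear(256, 256),
            nn.SiLU(),
            nn.Linear(256, NUM_JOINTS * 3),
        )

    def forward(self, x_t, cond_2d, t):
        t_feat = t.float().view(-1, 1) / T          # normalized timestep embedding
        return self.net(torch.cat([x_t, cond_2d, t_feat], dim=-1))

@torch.no_grad()
def sample_hypotheses(model, cond_2d, num_samples=10):
    """Run the DDPM reverse process once per hypothesis; the spread of the
    returned samples reflects the ambiguity of lifting 2D keypoints to 3D."""
    cond = cond_2d.expand(num_samples, -1)
    x = torch.randn(num_samples, NUM_JOINTS * 3)    # start each chain from pure noise
    for t in reversed(range(T)):
        t_batch = torch.full((num_samples,), t)
        eps = model(x, cond, t_batch)
        # DDPM posterior mean, then add noise except at the final step
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x.view(num_samples, NUM_JOINTS, 3)

# Usage with an untrained denoiser, purely to show the sampling interface:
model = PoseDenoiser()
keypoints_2d = torch.randn(1, NUM_JOINTS * 2)       # placeholder 2D observation
hypotheses = sample_hypotheses(model, keypoints_2d, num_samples=10)
print(hypotheses.shape)                             # torch.Size([10, 17, 3])
```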

Sources

SRHand: Super-Resolving Hand Images and 3D Shapes via View/Pose-aware Neural Image Representations and Explicit 3D Meshes

SDPose: Exploiting Diffusion Priors for Out-of-Domain and Robust Pose Estimation

Hierarchical Diffusion Motion Planning with Task-Conditioned Uncertainty-Aware Priors

LieHMR: Autoregressive Human Mesh Recovery with $SO(3)$ Diffusion

Contrastive Diffusion Guidance for Spatial Inverse Problems

Cascaded Diffusion Framework for Probabilistic Coarse-to-Fine Hand Pose Estimation

Zero-shot Human Pose Estimation using Diffusion-based Inverse solvers
