Advances in Human Motion and Interaction Modeling

The field of human motion and interaction modeling is advancing rapidly, with a focus on developing more realistic and nuanced models of human behavior. Recent work emphasizes capturing complex interactions between humans and objects, as well as modeling subtle movements and expressions. One notable trend is the use of large language models and diffusion-based approaches to generate more realistic and controllable human motion; a generic sampling loop for the latter is sketched below. There is also growing interest in estimating articulated object models and in synthesizing plausible hand-object interactions. These advances have significant implications for computer graphics, robotics, and virtual reality. Noteworthy papers in this area include InterPose, which learns to generate human-object interactions from large-scale web videos, and SMooGPT, which applies large language models to stylized motion generation.
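
To make the diffusion-based trend concrete, here is a minimal sketch of DDPM-style ancestral sampling for a motion sequence. Everything in it is an illustrative assumption rather than the method of any paper listed here: the network stub, the sequence length and pose dimensionality, and the noise schedule are all hypothetical stand-ins for a real conditioned motion-diffusion model.

```python
# Minimal sketch of DDPM-style sampling for human motion generation.
# All names and dimensions are illustrative assumptions, not the API of
# any paper listed in this digest (e.g., SMooGPT or the diffusion-bridge work).
import torch
import torch.nn as nn

SEQ_LEN, POSE_DIM, STEPS = 60, 63, 100  # 60 frames, 21 joints x 3 coords (assumed)

class DenoiserStub(nn.Module):
    """Placeholder noise-prediction network; a real model would condition
    on text/style embeddings and run a transformer over the time axis."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM + 1, 256), nn.SiLU(), nn.Linear(256, POSE_DIM)
        )

    def forward(self, x, t):
        # Broadcast the normalized timestep to every frame as an extra feature.
        t_feat = t.float().expand(x.shape[0], SEQ_LEN, 1) / STEPS
        return self.net(torch.cat([x, t_feat], dim=-1))

@torch.no_grad()
def sample_motion(model, batch=1):
    """Ancestral DDPM sampling: start from Gaussian noise and denoise."""
    betas = torch.linspace(1e-4, 0.02, STEPS)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn(batch, SEQ_LEN, POSE_DIM)  # pure-noise motion sequence
    for t in reversed(range(STEPS)):
        eps = model(x, torch.tensor(t))
        # Posterior mean of x_{t-1} given the predicted noise eps.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # (batch, SEQ_LEN, POSE_DIM) pose sequence

motion = sample_motion(DenoiserStub())
print(motion.shape)  # torch.Size([1, 60, 63])
```

In practice the denoiser is trained to predict the noise added to ground-truth motion clips, and controllability (text prompts, style labels, scene constraints) enters through conditioning inputs to the network rather than through the sampling loop itself.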

Sources

Learning Dolly-In Filming From Demonstration Using a Ground-Based Robot

InterPose: Learning to Generate Human-Object Interactions from Large-Scale Web Videos

FantasyHSI: Video-Generation-Centric 4D Human Synthesis In Any Scene through A Graph-based Multi-Agent Framework

Articulated Object Estimation in the Wild

Think2Sing: Orchestrating Structured Motion Subtitles for Singing-Driven 3D Head Animation

Towards Realistic Hand-Object Interaction with Gravity-Field Based Diffusion Bridge

Human Motion Video Generation: A Survey

SMooGPT: Stylized Motion Generation using Large Language Models

PAOLI: Pose-free Articulated Object Learning from Sparse-view Images

Virtual Fitting Room: Generating Arbitrarily Long Videos of Virtual Try-On from a Single Image -- Technical Preview

Evaluating Idle Animation Believability: a User Perspective

ManipDreamer3D: Synthesizing Plausible Robotic Manipulation Video with Occupancy-aware 3D Trajectory

Zero-shot 3D-Aware Trajectory-Guided image-to-video generation via Test-Time Training
