The field of video generation and prediction is moving toward physics-informed models that improve the realism and accuracy of generated videos. Researchers are exploring several approaches, including coupling physics simulators with video diffusion models, inferring intentions with Bayesian methods, and leveraging vision-language frameworks to predict trajectories and generate physically plausible motion. These methods have shown significant improvements over traditional approaches, enabling more realistic and controllable video generation. Noteworthy papers include:

- ControlHair, which introduces a physics-informed video diffusion framework for controllable dynamic hair rendering.
- Generating Stable Placements via Physics-guided Diffusion Models, which integrates stability constraints directly into the diffusion sampling process (see the sketch after this list).
- Enhancing Physical Plausibility in Video Generation by Reasoning the Implausibility, which improves physical plausibility at inference time by explicitly reasoning about implausible outcomes and guiding generation away from them.
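A common thread in the second and third papers is injecting a physics signal into the sampling loop itself. The sketch below illustrates one generic form of this idea: at each reverse-diffusion step, the partial sample is nudged along the gradient of a differentiable stability score. This is a minimal toy illustration, not any paper's actual implementation; `denoise_step` and `stability_score` are hypothetical placeholders standing in for a pretrained denoiser and a real differentiable physics check.

```python
import torch

# Minimal sketch of physics-guided diffusion sampling. Everything here is
# a hypothetical stand-in: `denoise_step` replaces a pretrained diffusion
# denoiser, and `stability_score` replaces a real physics/stability model.

def stability_score(x: torch.Tensor) -> torch.Tensor:
    # Toy differentiable "physics" score: penalize configurations whose
    # mean (a stand-in for a center of mass) drifts from the origin.
    return -(x.mean(dim=-1) ** 2).sum()

def denoise_step(x_t: torch.Tensor, t: int) -> torch.Tensor:
    # Placeholder reverse-diffusion step that shrinks noise toward zero;
    # a real system would query the pretrained diffusion model here.
    return (1.0 - 1.0 / (t + 1)) * x_t

def guided_sample(shape, num_steps: int = 50, guidance_scale: float = 0.1):
    x_t = torch.randn(shape)
    for t in reversed(range(1, num_steps + 1)):
        with torch.no_grad():
            x_t = denoise_step(x_t, t)
        # Physics guidance: nudge the partial sample up the gradient of
        # the stability score, steering sampling toward stable outputs.
        x_t = x_t.requires_grad_(True)
        (grad,) = torch.autograd.grad(stability_score(x_t), x_t)
        x_t = (x_t + guidance_scale * grad).detach()
    return x_t

if __name__ == "__main__":
    sample = guided_sample((1, 8))  # e.g., an 8-D object-pose vector
    print(sample)
```

As in guided-diffusion methods generally, the guidance scale trades physical plausibility against fidelity to the learned data distribution: too small and the physics signal is ignored, too large and samples drift off the model's manifold.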