Toward Physically Consistent Driving Video World Models under Challenging Trajectories
Abstract
PhyGenesis is a world model that generates high-fidelity driving videos with physical consistency by transforming invalid trajectories into plausible conditions and using a physics-enhanced video generator trained on real and simulated driving scenarios.
Video generation models have shown strong potential as world models for autonomous driving simulation. However, existing approaches are primarily trained on real-world driving datasets, which mostly contain natural and safe driving scenarios. As a result, current models often fail when conditioned on challenging or counterfactual trajectories, such as imperfect trajectories generated by simulators or planning systems, producing videos with severe physical inconsistencies and artifacts. To address this limitation, we propose PhyGenesis, a world model designed to generate driving videos with high visual fidelity and strong physical consistency. Our framework consists of two key components: (1) a physical condition generator that transforms potentially invalid trajectory inputs into physically plausible conditions, and (2) a physics-enhanced video generator that produces high-fidelity multi-view driving videos under these conditions. To effectively train these components, we construct a large-scale, physics-rich heterogeneous dataset. Specifically, in addition to real-world driving videos, we generate diverse challenging driving scenarios using the CARLA simulator, from which we derive supervision signals that guide the model to learn physically grounded dynamics under extreme conditions. This challenging-trajectory learning strategy enables trajectory correction and promotes physically consistent video generation. Extensive experiments demonstrate that PhyGenesis consistently outperforms state-of-the-art methods, especially on challenging trajectories. Our project page is available at: https://wm-research.github.io/PhyGenesis/.
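The abstract's two-stage design (correct an invalid trajectory, then generate video conditioned on the corrected one) can be illustrated with a minimal sketch. All class names, the per-frame displacement limit, and the clamping heuristic below are assumptions for illustration only; the paper does not publish code here, and the real condition generator is a learned model, not a geometric clamp.

```python
# Hypothetical sketch of a two-stage pipeline in the spirit of PhyGenesis.
# Names and the clamping heuristic are invented for illustration.
from dataclasses import dataclass
from typing import List, Tuple

Waypoint = Tuple[float, float]  # (x, y) in the ego frame, metres


@dataclass
class PhysicalConditionGenerator:
    """Stage 1: map a possibly invalid trajectory to a plausible condition."""
    max_step: float = 2.0  # assumed per-frame displacement limit (metres)

    def correct(self, traj: List[Waypoint]) -> List[Waypoint]:
        out = [traj[0]]
        for x, y in traj[1:]:
            px, py = out[-1]
            dx, dy = x - px, y - py
            dist = (dx * dx + dy * dy) ** 0.5
            if dist > self.max_step:  # clamp a physically implausible jump
                scale = self.max_step / dist
                x, y = px + dx * scale, py + dy * scale
            out.append((x, y))
        return out


class PhysicsEnhancedVideoGenerator:
    """Stage 2: render frames conditioned on the corrected trajectory."""

    def generate(self, conditions: List[Waypoint]) -> List[str]:
        # Placeholder: a real model would emit multi-view video frames.
        return [f"frame@({x:.1f},{y:.1f})" for x, y in conditions]


# Usage: a 10 m jump between consecutive waypoints gets clamped to 2 m.
raw = [(0.0, 0.0), (10.0, 0.0), (11.0, 0.0)]
corrected = PhysicalConditionGenerator().correct(raw)
frames = PhysicsEnhancedVideoGenerator().generate(corrected)
print(corrected)  # [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
```

The design point the sketch makes is the same one the abstract makes: the video generator never sees the raw, potentially invalid trajectory, only a physically plausible condition derived from it.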
Community
PhyGenesis generates physically consistent driving videos from challenging trajectories using a physical condition generator and physics-enhanced video generator trained on CARLA data.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ContactGaussian-WM: Learning Physics-Grounded World Model from Videos (2026)
- Bridging Scene Generation and Planning: Driving with World Model via Unifying Vision and Motion Representation (2026)
- Physical Simulator In-the-Loop Video Generation (2026)
- PhysVideo: Physically Plausible Video Generation with Cross-View Geometry Guidance (2026)
- V-Dreamer: Automating Robotic Simulation and Trajectory Synthesis via Video Generation Priors (2026)
- GA-Drive: Geometry-Appearance Decoupled Modeling for Free-viewpoint Driving Scene Generation (2026)
- Motion Forcing: A Decoupled Framework for Robust Video Generation in Motion Dynamics (2026)
Paper ID: 2603.24506