A Mechanistic View on Video Generation as World Models: State and Dynamics
Abstract
This survey categorizes video generation models by how they construct state and model dynamics, and argues for shifting evaluation from visual quality to functional capabilities such as physical persistence and causal reasoning.
Large-scale video generation models have demonstrated emergent physical coherence, positioning them as potential world models. However, a gap remains between contemporary "stateless" video architectures and classic state-centric world model theories. This work bridges that gap by proposing a novel taxonomy centered on two pillars: State Construction and Dynamics Modeling. We categorize state construction into implicit paradigms (context management) and explicit paradigms (latent compression), and analyze dynamics modeling through knowledge integration and architectural reformulation. Furthermore, we advocate for a transition in evaluation from visual fidelity to functional benchmarks that test physical persistence and causal reasoning. We conclude by identifying two critical frontiers: enhancing persistence via data-driven memory and compressed fidelity, and advancing causality through latent factor decoupling and reasoning-prior integration. By addressing these challenges, the field can evolve from generating visually plausible videos to building robust, general-purpose world simulators.
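As a rough illustration of the two state-construction paradigms the abstract distinguishes, the sketch below contrasts an implicit model, which re-reads a sliding window of past frames at every step (context management), with an explicit model, which compresses history into a compact latent state advanced by a dynamics module (latent compression). This is a minimal PyTorch sketch with placeholder module names and dimensions chosen purely for illustration; it is not drawn from the paper.

```python
import torch
import torch.nn as nn

class ImplicitContextModel(nn.Module):
    """Implicit state: no separate state variable; the generator conditions
    on a sliding window of past frame features (context management)."""
    def __init__(self, frame_dim=256, ctx_frames=8):
        super().__init__()
        self.ctx_frames = ctx_frames
        layer = nn.TransformerEncoderLayer(d_model=frame_dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.to_next_frame = nn.Linear(frame_dim, frame_dim)

    def forward(self, past_frames):                      # (B, T, frame_dim)
        window = past_frames[:, -self.ctx_frames:]       # keep only recent context
        h = self.backbone(window)
        return self.to_next_frame(h[:, -1])              # predict next frame features


class ExplicitLatentModel(nn.Module):
    """Explicit state: frames are compressed into a compact latent z, and a
    dynamics model advances z directly (latent compression + dynamics)."""
    def __init__(self, frame_dim=256, latent_dim=64):
        super().__init__()
        self.encoder = nn.Linear(frame_dim, latent_dim)   # compress the observation
        self.dynamics = nn.GRUCell(latent_dim, latent_dim)  # advance the latent state
        self.decoder = nn.Linear(latent_dim, frame_dim)    # render the next frame

    def step(self, frame, z):                             # frame: (B, frame_dim), z: (B, latent_dim)
        z = self.dynamics(self.encoder(frame), z)          # update the compressed state
        return self.decoder(z), z                          # next frame features + new state
```

In this toy setup, the implicit model's "memory" is bounded by its context window, whereas the explicit model's memory lives entirely in the latent z it carries forward, which is the distinction the taxonomy turns on.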
Community
While large-scale video generation models show signs of emergent physical coherence, they remain distinct from true world models. A critical gap persists between modern "stateless" video architectures and the "state-centric" requirements of classic control theory. This survey bridges that divide. We propose a new taxonomy built on State Construction (implicit context vs. explicit latent compression) and Dynamics Modeling. We argue that the field must transition its evaluation standards from simple visual fidelity to functional benchmarks—specifically testing for physical persistence and causal reasoning. Finally, we outline the path forward: solving data-driven memory for persistence and integrating reasoning priors for causality. These steps are essential to transition the field from merely generating visually plausible videos to building robust, general-purpose world simulators.