LiveWorld: Simulating Out-of-Sight Dynamics in Generative Video World Models
Abstract
LiveWorld addresses the out-of-sight dynamics problem in video world models by introducing a persistent global state representation that maintains continuous evolution of dynamic entities beyond the observer's field of view.
Recent generative video world models aim to simulate visual environment evolution, allowing an observer to interactively explore the scene via camera control. However, they implicitly assume that the world only evolves within the observer's field of view. Once an object leaves the observer's view, its state is "frozen" in memory, and revisiting the same region later often fails to reflect events that should have occurred in the meantime. In this work, we identify and formalize this overlooked limitation as the "out-of-sight dynamics" problem, which impedes video world models from representing a continuously evolving world. To address this issue, we propose LiveWorld, a novel framework that extends video world models to support persistent world evolution. Instead of treating the world as static observational memory, LiveWorld models a persistent global state composed of a static 3D background and dynamic entities that continue evolving even when unobserved. To maintain these unseen dynamics, LiveWorld introduces a monitor-based mechanism that autonomously simulates the temporal progression of active entities and synchronizes their evolved states upon revisiting, ensuring spatially coherent rendering. For evaluation, we further introduce LiveBench, a dedicated benchmark for the task of maintaining out-of-sight dynamics. Extensive experiments show that LiveWorld enables persistent event evolution and long-term scene consistency, bridging the gap between existing 2D observation-based memory and true 4D dynamic world simulation. The baseline and benchmark will be publicly available at https://zichengduan.github.io/LiveWorld/index.html.
Community
Introducing LiveWorld: Simulating Out-of-Sight Dynamics in Generative Video World Models
Current video world models have a critical flaw: they freeze objects the moment they leave the camera's view, completely ignoring elapsed time. We formalize this as the Out-of-Sight Dynamics problem.
LiveWorld solves this by explicitly decoupling World Evolution from Observation Rendering:
🔹 Virtual Monitors: We register "Monitors" that autonomously fast-forward the temporal progression of unobserved active entities in the background. When you look back, their states are up-to-date.
🔹 Tractable Efficiency: We factorize the world into a static 3D background (accumulated via SLAM) and sparse dynamic entities, keeping computation highly manageable.
🔹 LiveBench: We also introduce the first dedicated benchmark for evaluating long-horizon, out-of-sight dynamics.
With this design, LiveWorld narrows the gap between static observational memory and persistent 4D world simulation.
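The monitor-and-revisit idea above can be sketched in a few lines. This is a minimal toy, not the paper's implementation: all names (`Entity`, `World`, `fast_forward`) are hypothetical, and a scalar clock stands in for the learned dynamics that LiveWorld would actually use to evolve entity states.

```python
import dataclasses

@dataclasses.dataclass
class Entity:
    # Hypothetical dynamic entity whose state keeps evolving off-screen.
    name: str
    state: float
    last_seen: float  # world time when the entity was last observed

    def fast_forward(self, now: float) -> None:
        # Simulate the progression that happened while unobserved.
        # A scalar increment stands in for a learned dynamics model.
        elapsed = now - self.last_seen
        self.state += elapsed
        self.last_seen = now

class World:
    """Toy world: a static background plus monitored dynamic entities."""
    def __init__(self) -> None:
        self.time = 0.0
        self.monitors: dict[str, Entity] = {}  # out-of-view entities

    def leave_view(self, entity: Entity) -> None:
        # Register a monitor instead of freezing the entity's state.
        entity.last_seen = self.time
        self.monitors[entity.name] = entity

    def revisit(self, name: str) -> Entity:
        # Synchronize the entity's evolved state before rendering.
        entity = self.monitors.pop(name)
        entity.fast_forward(self.time)
        return entity

world = World()
kettle = Entity("kettle", state=0.0, last_seen=0.0)
world.leave_view(kettle)   # camera pans away at t=0
world.time = 5.0           # observer explores elsewhere for 5 time units
updated = world.revisit("kettle")
print(updated.state)       # 5.0: the state reflects elapsed out-of-sight time
```

The key design point mirrored here is the decoupling: `World.time` advances regardless of what is rendered, and synchronization happens lazily only when a region is revisited, which keeps the bookkeeping for sparse dynamic entities cheap.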
