WorldAgents: Can Foundation Image Models be Agents for 3D World Models?
Abstract
2D image models possess inherent 3D world modeling capabilities that can be harnessed through an agentic framework for 3D scene synthesis and reconstruction.
Given the remarkable ability of 2D foundation image models to generate high-fidelity outputs, we investigate a fundamental question: do 2D foundation image models inherently possess 3D world model capabilities? To answer this, we systematically evaluate multiple state-of-the-art image generation models and Vision-Language Models (VLMs) on the task of 3D world synthesis. To harness and benchmark their potential implicit 3D capability, we propose an agentic framework for 3D world generation. Our approach employs a multi-agent architecture: a VLM-based director that formulates prompts to guide image synthesis, a generator that synthesizes new image views, and a VLM-backed two-step verifier that evaluates and selectively curates generated frames in both 2D image space and 3D reconstruction space. Crucially, we demonstrate that our agentic approach yields coherent and robust 3D reconstructions, producing output scenes that can be explored by rendering novel views. Through extensive experiments across various foundation models, we show that 2D models do indeed encapsulate an understanding of 3D worlds. By exploiting this understanding, our method successfully synthesizes expansive, realistic, and 3D-consistent worlds.
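The abstract's director-generator-verifier loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: every class name, method signature, and the toy verification rules are assumptions standing in for real VLM and image-generation backends.

```python
from dataclasses import dataclass


@dataclass
class Frame:
    """A generated view, identified by its step and the prompt that produced it."""
    view_id: int
    prompt: str


class Director:
    """Hypothetical VLM-backed agent that formulates the next view prompt."""

    def next_prompt(self, step: int) -> str:
        return f"render viewpoint {step} of the scene"


class Generator:
    """Hypothetical image model that synthesizes a new view from a prompt."""

    def synthesize(self, step: int, prompt: str) -> Frame:
        return Frame(view_id=step, prompt=prompt)


class Verifier:
    """Two-step check: 2D image quality first, then 3D reconstruction consistency."""

    def passes_2d(self, frame: Frame) -> bool:
        # Stand-in for a VLM judgment of image quality in 2D space.
        return True

    def passes_3d(self, frame: Frame, scene: list[Frame]) -> bool:
        # Stand-in for a consistency check against the 3D reconstruction;
        # here we arbitrarily keep even view ids to illustrate selective curation.
        return frame.view_id % 2 == 0


def build_scene(n_steps: int) -> list[Frame]:
    """Run the agentic loop, curating only frames that pass both verifier stages."""
    director, generator, verifier = Director(), Generator(), Verifier()
    scene: list[Frame] = []
    for step in range(n_steps):
        frame = generator.synthesize(step, director.next_prompt(step))
        if verifier.passes_2d(frame) and verifier.passes_3d(frame, scene):
            scene.append(frame)
    return scene
```

The key design point the sketch captures is that verification gates curation: a frame enters the scene only after passing both the 2D and 3D checks, so the accumulated scene stays consistent as the loop proceeds.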
Community
Shows that 2D foundation image models encode 3D world understanding, enabling a multi-agent Director-Generator-Verifier system to synthesize coherent, 3D-consistent scenes and novel-view reconstructions.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- RoamScene3D: Immersive Text-to-3D Scene Generation via Adaptive Object-aware Roaming (2026)
- SceneAssistant: A Visual Feedback Agent for Open-Vocabulary 3D Scene Generation (2026)
- One2Scene: Geometric Consistent Explorable 3D Scene Generation from a Single Image (2026)
- SkeleGuide: Explicit Skeleton Reasoning for Context-Aware Human-in-Place Image Synthesis (2026)
- HorizonForge: Driving Scene Editing with Any Trajectories and Any Vehicles (2026)
- Generation Models Know Space: Unleashing Implicit 3D Priors for Scene Understanding (2026)
- Code2Worlds: Empowering Coding LLMs for 4D World Generation (2026)