---
license: apache-2.0
task_categories:
- robotics
tags:
- 3d-reconstruction
- novel-view-synthesis
- embodied-ai
- navigation
- urban-scenes
- gaussian-splatting
- colmap
size_categories:
- 100K<n<1M
---
# Wanderland Dataset
<div align="center">
[![arXiv](https://img.shields.io/badge/arXiv-2511.20620-red?logo=arxiv)](https://arxiv.org/abs/2511.20620)
[![Website](https://img.shields.io/badge/🔮_Website-ai4ce.github.io-blue)](https://ai4ce.github.io/wanderland/)
[![GitHub](https://img.shields.io/badge/GitHub-ai4ce%2Fwanderland-black?logo=github)](https://github.com/ai4ce/wanderland)
</div>
## Dataset Description
**Wanderland** is a large-scale urban dataset designed for geometrically grounded simulation and open-world embodied AI research. It comprises **diverse urban scenes** captured with dual fisheye cameras, providing high-quality data for 3D reconstruction, novel view synthesis, and navigation tasks.
### Key Features
- **Urban Scenes**: Diverse outdoor environments with varying complexity
- **Multi-Modal Data**: RGB images, depth, 3D point clouds, 3D Gaussian Splatting models
- **Camera Data**: Fisheye images + undistorted pinhole images (800×800, 90° FOV)
- **3D Reconstructions**: COLMAP sparse models + dense point clouds + 3DGS models
- **Navigation Data**: Isaac Sim compatible scene files (USDZ) + episode configurations
- **Official Splits**: 235 training scenes + 200 evaluation scenes (as used in the paper)
### Supported Tasks
- **3D Reconstruction**: Multi-view stereo, structure-from-motion, depth estimation
- **Novel View Synthesis**: NeRF, 3D Gaussian Splatting, view interpolation
- **Embodied AI Navigation**: Visual navigation, path planning, sim-to-real transfer
- **Scene Understanding**: 3D scene parsing, object detection, spatial reasoning
## Dataset Statistics (V1)
| Metric | Value |
|--------|-------|
| **Total Scenes** | 435 |
| **Training Scenes** | 235 |
| **Evaluation Scenes** | 200 |
| **Images per Scene** | ~200–1,000 (varies) |
| **Total Images** | ~420,000 |
| **Image Resolution (Undistorted)** | 800×800 |
| **Image Resolution (Fisheye)** | 2K |
| **Camera Model** | Dual fisheye → pinhole projection |
| **Point Cloud Size** | 1–10M points per scene |
| **Total Dataset Size** | ~1.24 TB |
## Dataset Structure
Each scene in the dataset contains the following files and directories:
```
data/
└── <scene_name>/
    ├── fisheye.tar.gz        # Original fisheye images (JPG, 1920×1080)
    ├── fisheye_mask.tar.gz   # Validity masks for fisheye images
    ├── images.tar.gz         # Undistorted images (PNG, 800×800, 90° FOV)
    ├── images_mask.tar.gz    # Validity masks for undistorted images
    ├── raw_pcd.ply           # Dense 3D point cloud (PLY format)
    ├── 3dgs.ply              # Pre-trained 3D Gaussian Splatting model
    ├── transforms.json       # Camera parameters (intrinsics + extrinsics)
    ├── scene.usdz            # Isaac Sim compatible scene file
    ├── episodes.json         # Navigation episode configurations
    ├── sparse/               # COLMAP sparse reconstruction
    │   └── 0/
    │       ├── cameras.bin   # Camera intrinsics (PINHOLE model)
    │       ├── images.bin    # Camera poses (quaternion + translation)
    │       └── points3D.bin  # Sparse 3D points
    └── nvs_split/            # Train/val splits for novel view synthesis
        ├── train.txt         # Training images (per-scene split)
        └── val.txt           # Validation images (per-scene split)
```
### File Descriptions
**Image Data:**
- `images/`: Undistorted pinhole images (800×800, 90° FOV, PNG format)
- `images_mask/`: Validity masks indicating valid pixel regions
- `fisheye/`: Original fisheye images (JPG format)
- `fisheye_mask/`: Validity masks for fisheye images
**3D Data:**
- `raw_pcd.ply`: Dense point cloud with RGB colors (PLY format)
- `3dgs.ply`: Pre-trained 3D Gaussian Splatting model
- `sparse/0/`: COLMAP sparse reconstruction (cameras, poses, sparse points)
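`raw_pcd.ply` can be loaded with any standard PLY reader (e.g. Open3D's `read_point_cloud`). For quick inspection without extra dependencies, the PLY header alone reveals the vertex count and stored properties. A minimal stdlib sketch (not part of the official tooling):

```python
def read_ply_header(path):
    """Parse a PLY header, returning (format, vertex_count, property_names)."""
    fmt, n_vertices, props = None, 0, []
    in_vertex_element = False
    with open(path, "rb") as f:
        assert f.readline().strip() == b"ply", "not a PLY file"
        for raw in f:
            line = raw.decode("ascii").strip()
            if line.startswith("format"):
                fmt = line.split()[1]           # ascii / binary_little_endian / ...
            elif line.startswith("element"):
                _, name, count = line.split()
                in_vertex_element = name == "vertex"
                if in_vertex_element:
                    n_vertices = int(count)
            elif line.startswith("property") and in_vertex_element:
                props.append(line.split()[-1])  # e.g. x, y, z, red, green, blue
            elif line == "end_header":
                break
    return fmt, n_vertices, props
```

This is useful for sanity-checking a downloaded scene (e.g. confirming the point count falls in the 1–10M range) before loading the full cloud into memory.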
**Camera Parameters:**
- `transforms.json`: Complete camera parameters (intrinsics, extrinsics, distortion)
- Coordinate system: COLMAP convention (camera-to-world)
**Navigation Data:**
- `scene.usdz`: USD scene file for NVIDIA Isaac Sim
- `episodes.json`: Navigation episode configurations
**Data Splits:**
- `nvs_split/`: Per-scene image splits for novel view synthesis
- `train_scenes_v1.txt`: Scene-level training split (235 scenes)
- `eval_scenes_v1.txt`: Scene-level evaluation split (200 scenes)
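For scripting against a scene directory, the per-scene files can be loaded with standard tools. A minimal sketch (the helper name `load_scene` is ours, not part of the dataset tooling; it only assumes the layout shown above):

```python
import json
import tarfile
from pathlib import Path

def load_scene(scene_dir):
    """Extract a scene's undistorted images and load its metadata."""
    scene_dir = Path(scene_dir)
    # Extract the undistorted images next to the archive (one-time step)
    with tarfile.open(scene_dir / "images.tar.gz") as tar:
        tar.extractall(scene_dir)
    # Camera parameters: intrinsics + per-frame extrinsics
    with open(scene_dir / "transforms.json") as f:
        transforms = json.load(f)
    # Per-scene NVS split: one image name per line
    train_names = (scene_dir / "nvs_split" / "train.txt").read_text().split()
    return transforms, train_names
```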
### Camera Models
**Fisheye Camera (Original):**
- Distortion: 4-parameter fisheye model (k1, k2, k3, k4)
- Dual camera setup (left + right)
**Undistorted Camera (Processed):**
- Model: PINHOLE (rectilinear projection)
- Intrinsics: fx=fy=400.0, cx=cy=400.0
- Resolution: 800×800 pixels
- Field of view: 90 degrees
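The intrinsics and field of view above are consistent with the standard pinhole relation FOV = 2·atan((W/2)/fx); with fx = 400 and W = 800 this gives exactly 90°. A quick check:

```python
import math

def horizontal_fov_deg(fx, width):
    # Pinhole relation: FOV = 2 * atan((W / 2) / fx)
    return math.degrees(2 * math.atan((width / 2) / fx))

# Wanderland undistorted intrinsics: fx = 400, W = 800  ->  90 degrees
fov = horizontal_fov_deg(400.0, 800)
```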
**Coordinate System:**
- Camera poses follow COLMAP convention
- Right-handed coordinate system
- Units: Meters
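Note that COLMAP's `images.bin` stores each pose as a world-to-camera rotation (quaternion in `(qw, qx, qy, qz)` order) plus translation, so converting to a camera-to-world matrix requires an inversion. A minimal NumPy sketch of that conversion (function names are ours):

```python
import numpy as np

def qvec_to_rotmat(qvec):
    # COLMAP quaternion order: (qw, qx, qy, qz)
    w, x, y, z = qvec
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ])

def colmap_to_c2w(qvec, tvec):
    # COLMAP stores world-to-camera: x_cam = R @ x_world + t
    R = qvec_to_rotmat(qvec)
    c2w = np.eye(4)
    c2w[:3, :3] = R.T                      # invert the rotation
    c2w[:3, 3] = -R.T @ np.asarray(tvec)   # camera center in world coordinates
    return c2w

# Identity pose: camera at the world origin
pose = colmap_to_c2w([1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
```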
## Download Instructions
For complete download instructions, options, and examples, see the [download README](https://github.com/ai4ce/wanderland/tree/main/download).
## License
This dataset is released under the **Apache 2.0 License**. See the [LICENSE](https://github.com/ai4ce/wanderland/blob/main/LICENSE) file for details.
## Citation
If you use the Wanderland dataset in your research, please cite:
```bibtex
@article{liu2025wanderland,
  title={Wanderland: Geometrically Grounded Simulation for Open-World Embodied AI},
  author={Liu, Xinhao and Li, Jiaqi and Deng, Youming and Chen, Ruxin and Zhang, Yingjia and Ma, Yifei and Guo, Li and Li, Yiming and Zhang, Jing and Feng, Chen},
  journal={arXiv preprint arXiv:2511.20620},
  year={2025}
}
```
## Links
- **Paper**: [arXiv:2511.20620](https://arxiv.org/abs/2511.20620)
- **Project Page**: [ai4ce.github.io/wanderland](https://ai4ce.github.io/wanderland/)
- **GitHub Repository**: [github.com/ai4ce/wanderland](https://github.com/ai4ce/wanderland)
- **Download Tool**: [Download README](https://github.com/ai4ce/wanderland/tree/main/download)