# Yonder: A 4.65M-Frame Drone-Perspective Dataset for Indoor Navigation

**The cross-simulator generalization gap.** Yonder is the largest publicly available drone-perspective dataset for indoor navigation, plus a closed-loop benchmark designed to expose a failure mode invisible to standard offline metrics: perception trained on one simulator does not transfer cleanly to a different simulator, even when both target the same task.

This dataset accompanies the NeurIPS 2026 Datasets & Benchmarks submission: *"Yonder: A 4.65M-Frame Drone Navigation Dataset and the Cross-Simulator Generalization Gap."*
## Headline numbers (paper subset)

- 4,650,324 drone-perspective frames
- 387,527 waypoint NPZ files (one per waypoint, each holding 12 yaws)
- 167 indoor 3D environments (all from HSSD, all with semantic annotations)
- 52 sensor arrays per NPZ (stereo RGB, depth, IR, LiDAR-360, semantic segmentation, pose, IMU)
- ~3.3 TB total
## What's in a waypoint

Every waypoint NPZ contains a single drone pose with 12 yaw orientations. For each yaw:

| Sensor | Resolution / Format |
|---|---|
| Left RGB | 640×480, uint8 |
| Right RGB | 640×480, uint8 |
| Forward depth | 640×480, float16 (meters) |
| Landing camera | 640×480, uint8 (downward) |
| Up IR / Down IR | 640×480, uint8 |
| LiDAR-360 | 1024 × 16 channels, float32 (meters) |
| Position / Orientation / IMU | float32 (Habitat-Sim world frame) |
| Semantic segmentation | 640×480 instance + class IDs (all 167 scenes) |
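The per-yaw arrays can be addressed programmatically. A minimal sketch, assuming the `yaw{deg:03d}_{sensor}` key pattern shown in the Quick start; only the `left_rgb` and `forward_depth` suffixes are confirmed there, and the remaining sensor names are guesses based on the table above:

```python
# Enumerate the per-yaw array keys expected in one waypoint NPZ.
# "left_rgb" and "forward_depth" match the Quick start; the other
# sensor suffixes are assumptions, not documented key names.
SENSORS = ["left_rgb", "right_rgb", "forward_depth",
           "landing_rgb", "up_ir", "down_ir"]

def yaw_keys(num_yaws: int = 12):
    """Yield per-yaw key names, assuming evenly spaced yaws over 360 degrees."""
    step = 360 // num_yaws  # 12 yaws -> 30 degree increments
    for deg in range(0, 360, step):
        for sensor in SENSORS:
            yield f"yaw{deg:03d}_{sensor}"

keys = list(yaw_keys())
# 12 yaws x 6 per-yaw sensors = 72 keys; pose/IMU/LiDAR appear once per waypoint.
```

A loader can then iterate `keys` against `np.load(...)` and skip any name absent in a given NPZ.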
## Source environments

Yonder is rendered from a single open-source 3D scene dataset:

| Source | License | Scenes | Waypoints | Has Semantics |
|---|---|---|---|---|
| HSSD (Habitat Synthetic Scenes Dataset) | CC-BY-NC-4.0 | 167 | 387,527 | 167 of 167 |
Earlier collection passes also covered ReplicaCAD (84 scenes, CC-BY-4.0, no semantics), Replica (Meta research-only terms), and HM3D (Matterport academic EULA). Replica and HM3D were excluded because their upstream licenses do not permit open redistribution of derivative renders. ReplicaCAD was excluded because it lacks semantic annotations and was not used in any reported experiment, keeping the release single-source and limited to experiment-relevant data. Yonder ships its own rendered observations only; no upstream meshes are redistributed.
## Preview gallery

A curated frame from each of the 227 scenes (167 HSSD-derived indoor + 60 Isaac-sim-native). Frames are picked from each scene's actual NPZ drone trajectory by an edge-density + spatial-variance scoring heuristic over 10 waypoints × 12 yaws of candidate frames per scene. Full gallery: `previews/INDEX.md`.
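The selection heuristic can be approximated in a few lines. This is a hypothetical reimplementation, not the released scoring code; the exact edge detector and the relative weighting of the two terms are assumptions:

```python
import numpy as np

def frame_score(rgb: np.ndarray) -> float:
    """Score a frame by edge density plus spatial variance.

    Our reading of the heuristic described above; the 0.01 weighting
    between the two terms is a guess, not the released value.
    """
    gray = rgb.astype(np.float32).mean(axis=-1)
    gy, gx = np.gradient(gray)                      # image gradients
    edge_density = float(np.hypot(gx, gy).mean())   # mean gradient magnitude
    spatial_var = float(gray.var())                 # intensity variance
    return edge_density + 0.01 * spatial_var

def pick_preview(frames) -> int:
    """Return the index of the highest-scoring candidate frame."""
    return int(np.argmax([frame_score(f) for f in frames]))
```

In this form a featureless frame scores 0, so textured, structured views win the argmax, which matches the gallery's stated goal of visually informative previews.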
## License

- Dataset (this repo): CC-BY-NC-4.0, inheriting HSSD's NonCommercial restriction; HSSD attribution is preserved per the source license.
- Code, model checkpoints, benchmark: Apache-2.0 (see the linked code repo).
## Quick start

```python
from huggingface_hub import snapshot_download
import numpy as np

# Download a single scene
path = snapshot_download(
    repo_id="astralhf/yonder",
    repo_type="dataset",
    allow_patterns="indoor/drone-data/augmented/hssd-102343992/*.npz",
)

# Load a waypoint
data = np.load(f"{path}/indoor/drone-data/augmented/hssd-102343992/hssd-102343992_wp0000.npz")
left_rgb_yaw0 = data["yaw000_left_rgb"]            # shape (480, 640, 3)
forward_depth_yaw0 = data["yaw000_forward_depth"]  # shape (480, 640), float16
lidar = data["lidar360"]                           # shape (1024, 16), float32
```
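The `lidar360` range image can be back-projected to a 3D point cloud. A sketch under assumed sensor geometry: the 360° azimuth sweep over 1024 columns follows the sensor table, but the vertical FOV and ring ordering are guesses, so consult the released code for the true intrinsics:

```python
import numpy as np

def lidar_to_points(ranges: np.ndarray, v_fov_deg: float = 30.0) -> np.ndarray:
    """Convert a (1024, 16) LiDAR range image to an (N, 3) point cloud.

    Azimuth spacing (360/1024) follows the LiDAR-360 layout; the 30 degree
    vertical FOV and the ring ordering are assumptions, not documented values.
    """
    n_az, n_rings = ranges.shape
    az = np.linspace(0, 2 * np.pi, n_az, endpoint=False)
    el = np.deg2rad(np.linspace(-v_fov_deg / 2, v_fov_deg / 2, n_rings))
    az_g, el_g = np.meshgrid(az, el, indexing="ij")
    # Spherical -> Cartesian, z up.
    x = ranges * np.cos(el_g) * np.cos(az_g)
    y = ranges * np.cos(el_g) * np.sin(az_g)
    z = ranges * np.sin(el_g)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Each output row preserves its input range as the point's Euclidean norm, which gives a quick sanity check after loading real data.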
## Reviewer sample

For NeurIPS reviewers and anyone who wants a small smoke test before downloading multiple terabytes, see the companion subset `astralhf/yonder-sample` (~500 MB, one HSSD scene, all 12 yaws per waypoint).
## Repository layout

```
indoor/drone-data/augmented/        # paper's primary subset (4.65M frames)
├── hssd-102343992/
│   ├── manifest.json
│   ├── hssd-102343992_wp0000.npz
│   └── ...
└── hssd-*/                         # 167 scene dirs
annotations/                        # COCO-format detection labels
└── hssd-*/
    ├── annotations.json            # per-scene bbox annotations
    └── object_inventory.json       # per-scene object catalog
indoor/isaac-sim-native/            # cross-simulator evaluation subset
├── scenes/                         # 60 Isaac-rendered indoor scenes
└── annotations/
outdoor/                            # sibling resources (not part of the NeurIPS paper)
├── boreal/ coastal/ desert/ forest/ lunar/
├── infinigen/                      # procedural Infinigen scenes
├── carla-cities/                   # 8 CARLA towns, full UE4 city geometry
└── carla-roads/                    # 8 CARLA towns, drivable surface only
```
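The per-scene `annotations.json` files use the standard COCO detection structure. A small helper that groups boxes by image for convenient lookup, assuming only the standard `images` / `annotations` / `categories` keys:

```python
import json
from collections import defaultdict

def index_coco(path: str) -> dict:
    """Group COCO-style annotations by image_id.

    Assumes standard COCO keys ('categories', 'annotations', 'category_id',
    'image_id', 'bbox'); any Yonder-specific extra fields are ignored.
    """
    with open(path) as f:
        coco = json.load(f)
    cat_names = {c["id"]: c["name"] for c in coco["categories"]}
    by_image = defaultdict(list)
    for ann in coco["annotations"]:
        by_image[ann["image_id"]].append(
            {"category": cat_names[ann["category_id"]], "bbox": ann["bbox"]}
        )
    return by_image
```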
Each scene directory under `indoor/drone-data/augmented/` contains:

- `manifest.json`: scene-level metadata (`scene_id`, sampling parameters, `total_waypoints`, `total_frames`, `unique_object_ids`).
- `<scene_id>_wp####.npz`: one NPZ file per waypoint, each holding 12 yaws across all sensor modalities.
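One practical use of `manifest.json` is sanity-checking a downloaded scene. A sketch that cross-checks the manifest's `total_waypoints` field against the NPZ files actually on disk; the field name comes from the description above, everything else is generic stdlib code:

```python
import glob
import json
import os

def check_scene(scene_dir: str) -> dict:
    """Verify a scene's waypoint NPZ count matches its manifest.

    Only 'total_waypoints' is taken from the documented manifest schema;
    the glob pattern mirrors the <scene_id>_wp####.npz naming convention.
    """
    with open(os.path.join(scene_dir, "manifest.json")) as f:
        manifest = json.load(f)
    npz_files = glob.glob(os.path.join(scene_dir, "*_wp*.npz"))
    if len(npz_files) != manifest["total_waypoints"]:
        raise RuntimeError(
            f"{scene_dir}: {len(npz_files)} NPZ files on disk, "
            f"manifest says {manifest['total_waypoints']}"
        )
    return manifest
```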
## Cross-simulator evaluation subset (`indoor/isaac-sim-native/`)

Yonder's central thesis, that perception trained on one simulator does not transfer cleanly to another, requires an evaluation set rendered in a different simulator. We provide 60 Isaac-rendered indoor scenes (warehouse, hospital, and office variants) under `indoor/isaac-sim-native/scenes/`, with companion annotations under `indoor/isaac-sim-native/annotations/`. These scenes are used for closed-loop navigation evaluation in the paper.
## Sibling resources (not part of the NeurIPS paper)

The repository also hosts outdoor scene assets for separate research not described in the Yonder paper. These are USD scene geometry (a different schema from the indoor waypoint NPZs):

- `outdoor/{boreal,coastal,desert,forest,lunar}/`: biome-specific scenes with per-biome `previews/`, `prototypes/`, and `scenes/` directories.
- `outdoor/infinigen/`: procedural scenes generated with Infinigen; 16 scenes spanning canyon, coast, desert, forest, and mountain biomes.
- `outdoor/carla-cities/`: full UE4 city geometry from CARLA's 8 towns, ~14k mesh instances total. Open in Isaac Sim with `omni.usd.get_context().open_stage("outdoor/carla-cities/Town03/scene.usd")`.
- `outdoor/carla-roads/`: drivable-surface USD only, derived from CARLA OpenDRIVE (MIT-licensed); 8 towns, plus 380k+ semantic axis-aligned bboxes per town.
These outdoor resources are governed by their respective upstream licenses (CARLA: MIT; Infinigen: BSD-3-Clause). They are not part of the NeurIPS paper's claims and should be evaluated against their own licenses if used.
## Splits

Yonder is released without pre-defined train/val/test splits. The paper's experiments hold out 10% of waypoints, uniformly sampled across scenes; see the paper for the exact split file (also released alongside the code).
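For experimentation before the official split file is in hand, a uniform hold-out can be sketched as below. The seed is arbitrary and this does not reproduce the paper's exact split; use the released split file for that:

```python
import numpy as np

def holdout_split(waypoint_ids, frac: float = 0.10, seed: int = 0):
    """Uniformly sample a held-out fraction of waypoint IDs.

    Sketches the stated protocol (10% uniform); the seed is arbitrary,
    so the result will not match the paper's released split file.
    """
    rng = np.random.default_rng(seed)
    ids = np.asarray(waypoint_ids)
    n_held = max(1, int(round(frac * len(ids))))
    held = set(rng.choice(ids, size=n_held, replace=False).tolist())
    train = [i for i in ids.tolist() if i not in held]
    return train, sorted(held)
```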
## Intended use

✅ Recommended:
- Training drone-perspective perception models (open-vocabulary detection, monocular depth, semantic segmentation).
- Studying cross-simulator generalization. Pair offline metrics with closed-loop evaluation in a different simulator. The whole point of Yonder is that doing only the former is misleading.
- Benchmarking long-horizon visual-language navigation, when paired with a closed-loop evaluator.
⚠️ Use with caution:
- End-to-end navigation policy training. Yonder is a perception-training resource; we do not provide expert trajectories for behavior cloning.
- Any metric reported on Yonder's offline evaluation split alone, without a closed-loop counterpart, may not reflect deployment performance.
🚫 Not for:
- Commercial use (the entire dataset inherits HSSD's CC-BY-NC restriction).
- Surveillance, biometric identification, or any application of open-vocabulary detection to identify specific real persons. The simulated scenes contain no real persons; transfer to person-identification tasks is out of scope and expressly disallowed.
## Responsible AI considerations
- No real persons. Yonder is rendered entirely from synthetic 3D scenes (HSSD); no humans are present in any frame. No PII, no biometric data, no faces.
- Synthetic-only domain. Performance on Yonder does not transfer to real imagery without explicit sim-to-real treatment. Anyone deploying perception trained on Yonder in the real world must perform their own real-domain validation.
- Geographic / cultural bias. HSSD scenes are biased toward Western residential interiors. Models trained on Yonder may underperform on interior styles outside this distribution.
- Cross-simulator evaluation is mandatory. The dataset's primary contribution is making it easy to discover that fine-tuning gains can be illusory. Models reported on Yonder should be validated in a different simulator (or the real world) before claims of improvement are made.
See the accompanying Croissant metadata (`yonder.croissant.json`) for machine-readable RAI fields.
## Citation

```bibtex
@inproceedings{anonymous2026yonder,
  title     = {Yonder: A 4.65M-Frame Drone Navigation Dataset and the Cross-Simulator Generalization Gap},
  author    = {Anonymous Author(s)},
  booktitle = {Advances in Neural Information Processing Systems (Datasets and Benchmarks Track)},
  year      = {2026},
  note      = {Anonymized for double-blind review.}
}
```
## Authors and contact
Authors and affiliation are anonymized for NeurIPS double-blind review. After review, the camera-ready version will list authors and contact information here.
## Changelog

- 2026-05-01: Initial release. 167 HSSD scenes / 387,527 waypoints / 4.65M frames / semantic annotations for all 167. HM3D, Replica, and ReplicaCAD subsets were removed prior to release (HM3D and Replica for license incompatibility; ReplicaCAD for lacking semantic annotations and not being used in any reported experiment). Three Habitat test scenes (apartment_1, skokloster-castle, van-gogh-room) were also excluded for upstream-license incompatibility.