---
license: cc-by-4.0
pretty_name: EmbodiedNav-Bench
language:
- en
task_categories:
- visual-question-answering
- reinforcement-learning
tags:
- embodied-ai
- embodied-navigation
- urban-airspace
- drone-navigation
- multimodal-reasoning
- spatial-reasoning
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: test
path: viewer-00000-of-00001.parquet
---
# EmbodiedNav-Bench
[GitHub Repository](https://github.com/serenditipy-AC/Embodied-Navigation-Bench)
[Paper (arXiv)](https://arxiv.org/html/2604.07973v1)
EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating spatial action in urban 3D airspace. The benchmark contains 5,037 high-quality navigation trajectories with natural-language navigation goals, initial drone poses, target positions, and ground-truth 3D trajectories.
This Hugging Face repository hosts the dataset artifacts. The accompanying project code, simulator setup, media examples, and evaluation scripts are maintained in the GitHub repository: https://github.com/serenditipy-AC/Embodied-Navigation-Bench
## Dataset Summary
The benchmark contains 5,037 goal-oriented navigation trajectories. Each sample corresponds to one navigation task in an urban 3D environment, with a natural-language goal description and a human-collected ground-truth trajectory.
The dataset is intended for evaluating embodied navigation, spatial reasoning, and multimodal decision-making models in urban airspace scenarios.
## Repository Contents
| Path | Description |
| :-- | :-- |
| `navi_data.pkl` | Canonical PKL file for evaluation. |
| `viewer-00000-of-00001.parquet` | Parquet representation for the Hugging Face Dataset Viewer table. |
## Data Fields
The canonical PKL file stores a list of Python dictionaries. Each sample contains the following fields:
| Field | Type | Description |
| :-- | :-- | :-- |
| `sample_index` | `int` | Case index. |
| `start_pos` | `float[3]` | Initial drone world position `(x, y, z)`. |
| `start_rot` | `float[3]` | Initial drone orientation `(roll, pitch, yaw)` in radians. |
| `start_ang` | `float` | Initial camera gimbal angle in degrees. |
| `task_desc` | `str` | Natural-language navigation instruction. |
| `target_pos` | `float[3]` | Target world position `(x, y, z)`. |
| `gt_traj` | `float[N,3]` | Ground-truth trajectory points. |
| `gt_traj_len` | `float` | Ground-truth trajectory length. |
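As a minimal sketch of working with one entry, the snippet below builds a dictionary shaped like a sample from `navi_data.pkl` (the values are illustrative, not taken from the real dataset) and recomputes the trajectory length from `gt_traj`:

```python
import math
import pickle

# Illustrative sample mirroring the field schema above; the real file
# stores a list of such dicts and would be loaded with:
#     with open("navi_data.pkl", "rb") as f:
#         data = pickle.load(f)
sample = {
    "sample_index": 0,
    "start_pos": [0.0, 0.0, 10.0],
    "start_rot": [0.0, 0.0, 1.57],
    "start_ang": -30.0,
    "task_desc": "Fly toward the red rooftop across the street.",
    "target_pos": [30.0, 40.0, 10.0],
    "gt_traj": [[0.0, 0.0, 10.0], [30.0, 40.0, 10.0]],
    "gt_traj_len": 50.0,
}

def traj_length(points):
    """Sum of Euclidean distances between consecutive trajectory points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

print(traj_length(sample["gt_traj"]))  # 50.0, matching gt_traj_len
```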
The Parquet table includes the same structured fields and additional convenience columns such as `sample_index`, `start_x`, `start_y`, `start_z`, `target_x`, `target_y`, `target_z`, and `gt_traj_num_points`. The `folder` field is omitted from the table because `sample_index` provides the browsing index. The Parquet file is provided for browsing and visualization in the Hugging Face Dataset Viewer.
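A sketch of browsing the convenience columns with pandas (the two rows here are constructed for illustration; real data would come from `pd.read_parquet("viewer-00000-of-00001.parquet")`, which requires a parquet engine such as pyarrow):

```python
import pandas as pd

# Illustrative rows mirroring the Parquet convenience columns.
df = pd.DataFrame({
    "sample_index": [0, 1],
    "start_x": [0.0, 5.0],
    "start_y": [0.0, -3.0],
    "start_z": [10.0, 60.0],
    "target_x": [30.0, 12.0],
    "target_y": [40.0, 8.0],
    "target_z": [10.0, 55.0],
    "gt_traj_num_points": [2, 14],
})

# Example filter: tasks whose ground-truth trajectory has many waypoints.
long_tasks = df[df["gt_traj_num_points"] > 10]
print(long_tasks["sample_index"].tolist())  # [1]
```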
## Usage
The Dataset Viewer-compatible table can be loaded with the `datasets` library (the table is published under the `test` split declared in the card metadata):
```python
from datasets import load_dataset

ds = load_dataset("EmbodiedCity/EmbodiedNav-Bench", split="test")
print(ds[0])
```
For evaluation, use `navi_data.pkl` as the canonical data file and follow the setup instructions in the GitHub project repository.
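For intuition only, a common embodied-navigation success criterion can be sketched as follows; this is an illustrative check, not the benchmark's official evaluation metric, and the threshold value is an assumption:

```python
import math

def navigation_success(final_pos, target_pos, threshold=5.0):
    """Illustrative success check: final drone position within `threshold`
    meters of the target (NOT the benchmark's official metric)."""
    return math.dist(final_pos, target_pos) <= threshold

print(navigation_success([29.0, 39.0, 10.0], [30.0, 40.0, 10.0]))  # True
```

The official metrics and thresholds are defined by the evaluation scripts in the GitHub project repository.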
## License
This dataset is released under the CC-BY-4.0 license.
## Citation
```bibtex
@misc{zhao2026farlargemultimodalmodels,
title={How Far Are Large Multimodal Models from Human-Level Spatial Action? A Benchmark for Goal-Oriented Embodied Navigation in Urban Airspace},
author={Baining Zhao and Ziyou Wang and Jianjie Fang and Zile Zhou and Yanggang Xu and Yatai Ji and Jiacheng Xu and Qian Zhang and Weichen Zhang and Chen Gao and Xinlei Chen},
year={2026},
eprint={2604.07973},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/html/2604.07973v1},
}
```