Formats: parquet
Languages: English
Size: 1K - 10K
Tags: embodied-ai, embodied-navigation, urban-airspace, drone-navigation, multimodal-reasoning, spatial-reasoning
Fix dataset card arXiv citation
Correct the dataset card arXiv badge and BibTeX citation from arXiv:2505.19789 to arXiv:2604.07973.
README.md (changed)

````diff
@@ -25,7 +25,7 @@ configs:
 # EmbodiedNav-Bench
 
 [](https://github.com/serenditipy-AC/Embodied-Navigation-Bench)
-[](https://arxiv.org/abs/2505.19789)
+[](https://arxiv.org/abs/2604.07973)
 
 EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating spatial action in urban 3D airspace. The dataset provides natural-language navigation goals, initial drone poses, target positions, and ground-truth 3D trajectories for embodied navigation evaluation.
@@ ... @@
 ## Citation
 
 ```bibtex
+@misc{zhao2026farlargemultimodalmodels,
+      title={How Far Are Large Multimodal Models from Human-Level Spatial Action? A Benchmark for Goal-Oriented Embodied Navigation in Urban Airspace},
+      author={Baining Zhao and Ziyou Wang and Jianjie Fang and Zile Zhou and Yanggang Xu and Yatai Ji and Jiacheng Xu and Qian Zhang and Weichen Zhang and Chen Gao and Xinlei Chen},
       year={2026},
+      eprint={2604.07973},
       archivePrefix={arXiv},
+      primaryClass={cs.AI},
+      url={https://arxiv.org/abs/2604.07973},
 }
 ```
````