---
license: cc-by-4.0
pretty_name: EmbodiedNav-Bench
language:
  - en
task_categories:
  - visual-question-answering
  - reinforcement-learning
tags:
  - embodied-ai
  - embodied-navigation
  - urban-airspace
  - drone-navigation
  - multimodal-reasoning
  - spatial-reasoning
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: test
        path: viewer-00000-of-00001.parquet
---

# EmbodiedNav-Bench

[GitHub](https://github.com/serenditipy-AC/Embodied-Navigation-Bench) · [arXiv](https://arxiv.org/html/2604.07973v1)

EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating spatial action in urban 3D airspace. The benchmark contains 5,037 high-quality navigation trajectories with natural-language navigation goals, initial drone poses, target positions, and ground-truth 3D trajectories.

This Hugging Face repository hosts the dataset artifacts. The accompanying project code, simulator setup, media examples, and evaluation scripts are maintained in the GitHub repository: https://github.com/serenditipy-AC/Embodied-Navigation-Bench

## Dataset Summary

The benchmark contains 5,037 goal-oriented navigation trajectories. Each sample corresponds to one navigation task in an urban 3D environment, with a natural-language goal description and a human-collected ground-truth trajectory.

The dataset is intended for evaluating embodied navigation, spatial reasoning, and multimodal decision-making models in urban airspace scenarios.

## Repository Contents

| Path | Description |
| --- | --- |
| `navi_data.pkl` | Canonical PKL file for evaluation. |
| `viewer-00000-of-00001.parquet` | Parquet table for the Hugging Face Dataset Viewer. |

## Data Fields

The canonical PKL file stores a list of Python dictionaries. Each sample contains the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `sample_index` | `int` | Case index. |
| `start_pos` | `float[3]` | Initial drone world position (x, y, z). |
| `start_rot` | `float[3]` | Initial drone orientation (roll, pitch, yaw) in radians. |
| `start_ang` | `float` | Initial camera gimbal angle in degrees. |
| `task_desc` | `str` | Natural-language navigation instruction. |
| `target_pos` | `float[3]` | Target world position (x, y, z). |
| `gt_traj` | `float[N,3]` | Ground-truth trajectory points. |
| `gt_traj_len` | `float` | Ground-truth trajectory length. |

The Parquet table includes the same structured fields plus convenience columns such as `sample_index`, `start_x`, `start_y`, `start_z`, `target_x`, `target_y`, `target_z`, and `gt_traj_num_points`. It is provided for browsing and visualization in the Hugging Face Dataset Viewer. The `folder` field is omitted from the table because `sample_index` already serves as the browsing index.
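As a sketch of working with the convenience columns, assuming pandas (with a Parquet engine such as pyarrow) is installed. The rows below are synthetic placeholders, not real samples; in practice you would read the actual file with `pd.read_parquet("viewer-00000-of-00001.parquet")`:

```python
import pandas as pd

# Synthetic rows mimicking the viewer table's convenience columns.
# In practice: df = pd.read_parquet("viewer-00000-of-00001.parquet")
df = pd.DataFrame({
    "sample_index": [0, 1],
    "start_x": [10.0, -3.5], "start_y": [2.0, 8.1], "start_z": [30.0, 25.0],
    "target_x": [52.0, 14.2], "target_y": [-7.5, 3.3], "target_z": [28.0, 40.0],
    "gt_traj_num_points": [128, 96],
})

# Recombine the flat columns into (x, y, z) tuples, matching the PKL schema.
df["start_pos"] = list(zip(df["start_x"], df["start_y"], df["start_z"]))
df["target_pos"] = list(zip(df["target_x"], df["target_y"], df["target_z"]))
print(df[["sample_index", "start_pos", "target_pos"]])
```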

## Usage

For evaluation, use navi_data.pkl as the canonical data file and follow the setup instructions in the GitHub project repository.
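A minimal sketch of reading the canonical file and recomputing a trajectory length. The sample shown is synthetic and purely illustrative (field names follow the schema above; the instruction text and coordinates are invented):

```python
import math
import pickle

def trajectory_length(points):
    """Sum of Euclidean distances between consecutive 3D trajectory points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Loading the canonical file (path relative to a local clone of this repo):
# with open("navi_data.pkl", "rb") as f:
#     samples = pickle.load(f)  # list of dicts with the fields listed above

# Synthetic sample for illustration only.
sample = {
    "sample_index": 0,
    "start_pos": [0.0, 0.0, 30.0],
    "start_rot": [0.0, 0.0, 1.57],
    "start_ang": -30.0,
    "task_desc": "Fly to the red rooftop across the street.",
    "target_pos": [40.0, 30.0, 30.0],
    "gt_traj": [[0.0, 0.0, 30.0], [40.0, 0.0, 30.0], [40.0, 30.0, 30.0]],
}
sample["gt_traj_len"] = trajectory_length(sample["gt_traj"])
print(sample["gt_traj_len"])  # 70.0
```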

## License

This dataset is released under the CC-BY-4.0 license.

## Citation

```bibtex
@misc{zhao2026farlargemultimodalmodels,
      title={How Far Are Large Multimodal Models from Human-Level Spatial Action? A Benchmark for Goal-Oriented Embodied Navigation in Urban Airspace},
      author={Baining Zhao and Ziyou Wang and Jianjie Fang and Zile Zhou and Yanggang Xu and Yatai Ji and Jiacheng Xu and Qian Zhang and Weichen Zhang and Chen Gao and Xinlei Chen},
      year={2026},
      eprint={2604.07973},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/html/2604.07973v1},
}
```