---
license: cc-by-4.0
pretty_name: EmbodiedNav-Bench
language:
  - en
task_categories:
  - visual-question-answering
  - reinforcement-learning
tags:
  - embodied-ai
  - embodied-navigation
  - urban-airspace
  - drone-navigation
  - multimodal-reasoning
  - spatial-reasoning
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-00000-of-00001.parquet
---

# EmbodiedNav-Bench

EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating spatial action in urban 3D airspace. This Hugging Face dataset repository hosts the released navigation sample data and a table compatible with the Dataset Viewer. Code, simulator instructions, examples, and evaluation scripts are maintained in the GitHub project repository: https://github.com/serenditipy-AC/Embodied-Navigation-Bench

## Files

- `dataset/navi_data.pkl`: canonical PKL file for evaluation.
- `dataset/navi_data_preview.json`: human-readable preview of the PKL content.
- `data/train-00000-of-00001.parquet`: Parquet conversion for the Hugging Face Dataset Viewer table.
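As a minimal sketch of inspecting these files locally (assuming standard `pickle`/`json` serialization and that the repository has been downloaded to the working directory):

```python
import json
import pickle
from pathlib import Path

def load_samples(pkl_path="dataset/navi_data.pkl"):
    """Load the canonical evaluation samples from the pickle file."""
    with open(pkl_path, "rb") as f:
        return pickle.load(f)

def load_preview(json_path="dataset/navi_data_preview.json"):
    """Load the human-readable JSON preview of the same content."""
    with open(json_path, "r", encoding="utf-8") as f:
        return json.load(f)

if __name__ == "__main__":
    # Only attempt to load when the files are actually present.
    if Path("dataset/navi_data.pkl").exists():
        samples = load_samples()
        print(type(samples))
```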

## Dataset Contents

The current release contains 300 public example trajectories. Each row corresponds to one navigation trajectory with a natural-language goal, initial drone pose, target position, and ground-truth 3D trajectory.

| Field | Type | Description |
| --- | --- | --- |
| `folder` | str | Scene folder identifier |
| `start_pos` | float[3] | Initial drone world position (x, y, z) |
| `start_rot` | float[3] | Initial drone orientation (roll, pitch, yaw) in radians |
| `start_ang` | float | Initial camera gimbal angle in degrees |
| `task_desc` | str | Natural-language navigation instruction |
| `target_pos` | float[3] | Target world position (x, y, z) |
| `gt_traj` | float[N, 3] | Ground-truth trajectory points |
| `gt_traj_len` | float | Ground-truth trajectory length |
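A short sketch of how `gt_traj` relates to `gt_traj_len`, under the assumption that the length is the sum of Euclidean segment lengths along the trajectory (the sample points below are hypothetical, not drawn from the dataset):

```python
import math

def polyline_length(points):
    """Total Euclidean length of a 3D polyline given as [[x, y, z], ...]."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Hypothetical 3-point trajectory: two axis-aligned 10 m segments.
traj = [[0.0, 0.0, 5.0], [10.0, 0.0, 5.0], [10.0, 10.0, 5.0]]
print(polyline_length(traj))  # 20.0
```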

The Parquet table additionally includes convenience columns such as `sample_index`, `start_x`, `start_y`, `start_z`, `target_x`, `target_y`, `target_z`, and `gt_traj_num_points` to make browsing and filtering easier.
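As a sketch of how the flat coordinate columns can be used, the straight-line start-to-target distance is easy to compute directly from them (the row values below are hypothetical, not taken from the dataset):

```python
import math

def start_to_target_distance(row):
    """Straight-line distance from start to target using the flat Parquet columns."""
    return math.dist(
        (row["start_x"], row["start_y"], row["start_z"]),
        (row["target_x"], row["target_y"], row["target_z"]),
    )

# Hypothetical row mirroring the Parquet convenience columns.
row = {"start_x": 0.0, "start_y": 0.0, "start_z": 5.0,
       "target_x": 30.0, "target_y": 40.0, "target_z": 5.0}
print(start_to_target_distance(row))  # 50.0
```

Comparing this value against `gt_traj_len` gives a quick sense of how much a trajectory deviates from the direct path.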

## Loading

```python
from datasets import load_dataset

ds = load_dataset("EmbodiedCity/EmbodiedNav-Bench")
print(ds["train"][0])
```

For evaluation, use `dataset/navi_data.pkl` from this repository, or follow the release instructions in the GitHub project.

## Notes

This repository hosts only the dataset. The GitHub project repository contains the project README, simulator setup, media examples, and evaluation code: https://github.com/serenditipy-AC/Embodied-Navigation-Bench

Hugging Face Dataset Viewer support for private dataset repositories depends on the account or organization plan. The Parquet table is included so the Table view can render when Dataset Viewer indexing is available.