serendipityAc2Win committed on
Commit ece49a2 · verified · 1 Parent(s): 40fe28f

Update dataset card links and citation


Add GitHub and arXiv badges plus the arXiv citation to the Hugging Face dataset card.

Files changed (1)
  1. README.md +48 -21
README.md CHANGED
@@ -24,32 +24,47 @@ configs:
 
 # EmbodiedNav-Bench
 
- EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating spatial action in urban 3D airspace. This Hugging Face dataset repository hosts the released navigation sample data and a Dataset Viewer compatible table. Code, simulator instructions, examples, and evaluation scripts are maintained in the GitHub project repository: https://github.com/serenditipy-AC/Embodied-Navigation-Bench
-
- ## Files
-
- `dataset/navi_data.pkl`: canonical PKL file for evaluation.
- `dataset/navi_data_preview.json`: human-readable preview of the PKL content.
- `data/train-00000-of-00001.parquet`: Parquet conversion for the Hugging Face Dataset Viewer Table.
-
- ## Dataset Contents
-
- The current release contains 300 public example trajectories. Each row/sample corresponds to one navigation trajectory with a natural-language goal, initial drone pose, target position, and ground-truth 3D trajectory.
 
 | Field | Type | Description |
 | :-- | :-- | :-- |
- | `folder` | `str` | Scene folder identifier |
- | `start_pos` | `float[3]` | Initial drone world position `(x, y, z)` |
- | `start_rot` | `float[3]` | Initial drone orientation `(roll, pitch, yaw)` in radians |
- | `start_ang` | `float` | Initial camera gimbal angle in degrees |
- | `task_desc` | `str` | Natural-language navigation instruction |
- | `target_pos` | `float[3]` | Target world position `(x, y, z)` |
- | `gt_traj` | `float[N,3]` | Ground-truth trajectory points |
- | `gt_traj_len` | `float` | Ground-truth trajectory length |
-
- The Parquet table additionally includes convenience columns such as `sample_index`, `start_x`, `start_y`, `start_z`, `target_x`, `target_y`, `target_z`, and `gt_traj_num_points` to make browsing and filtering easier.
-
- ## Loading
-
 ```python
 from datasets import load_dataset
@@ -58,10 +73,22 @@ ds = load_dataset("EmbodiedCity/EmbodiedNav-Bench")
 print(ds["train"][0])
 ```
 
- For evaluation, use `dataset/navi_data.pkl` from this repository or the GitHub project release instructions.
-
- ## Notes
-
- This is the dataset hosting repository. The GitHub project repository contains the project README, simulator setup, media examples, and evaluation code: https://github.com/serenditipy-AC/Embodied-Navigation-Bench
-
- Hugging Face Dataset Viewer support for private dataset repositories depends on the account or organization plan. The Parquet table is included so the Table view can render when Dataset Viewer indexing is available.
 
 # EmbodiedNav-Bench
 
+ [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github&logoColor=white)](https://github.com/serenditipy-AC/Embodied-Navigation-Bench)
+ [![arXiv](https://img.shields.io/badge/arXiv-2505.19789-b31b1b.svg?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2505.19789)
+
+ EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating spatial action in urban 3D airspace. The dataset provides natural-language navigation goals, initial drone poses, target positions, and ground-truth 3D trajectories for embodied navigation evaluation.
+
+ This Hugging Face repository hosts the dataset artifacts. The accompanying project code, simulator setup, media examples, and evaluation scripts are maintained in the GitHub repository: https://github.com/serenditipy-AC/Embodied-Navigation-Bench
+
+ ## Dataset Summary
+
+ The current release contains 300 navigation trajectories sampled from the EmbodiedNav-Bench benchmark. Each sample corresponds to one goal-oriented navigation task in an urban 3D environment, with a natural-language goal description and a human-collected ground-truth trajectory.
+
+ The dataset is intended for evaluating embodied navigation, spatial reasoning, and multimodal decision-making models in urban airspace scenarios.
+
+ ## Repository Contents
+
+ | Path | Description |
+ | :-- | :-- |
+ | `dataset/navi_data.pkl` | Canonical PKL file for evaluation. |
+ | `dataset/navi_data_preview.json` | Human-readable JSON preview of the PKL content. |
+ | `data/train-00000-of-00001.parquet` | Parquet conversion for the Hugging Face Dataset Viewer table. |
+
+ ## Data Fields
+
+ The canonical PKL file stores a list of Python dictionaries. Each sample contains the following fields:
 
 | Field | Type | Description |
 | :-- | :-- | :-- |
+ | `folder` | `str` | Scene folder identifier. |
+ | `start_pos` | `float[3]` | Initial drone world position `(x, y, z)`. |
+ | `start_rot` | `float[3]` | Initial drone orientation `(roll, pitch, yaw)` in radians. |
+ | `start_ang` | `float` | Initial camera gimbal angle in degrees. |
+ | `task_desc` | `str` | Natural-language navigation instruction. |
+ | `target_pos` | `float[3]` | Target world position `(x, y, z)`. |
+ | `gt_traj` | `float[N,3]` | Ground-truth trajectory points. |
+ | `gt_traj_len` | `float` | Ground-truth trajectory length. |
+
+ The Parquet table includes the same structured fields and additional convenience columns such as `sample_index`, `start_x`, `start_y`, `start_z`, `target_x`, `target_y`, `target_z`, and `gt_traj_num_points`.
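As a quick sanity check of the schema above, the value in `gt_traj_len` can be recomputed from the `gt_traj` waypoints, assuming it is the cumulative Euclidean length of the trajectory polyline (an assumption; verify against the evaluation code in the GitHub repository). A minimal sketch:

```python
import math

def traj_length(points):
    """Cumulative Euclidean length of a polyline of (x, y, z) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Toy trajectory in the same (N, 3) layout as `gt_traj` (values made up).
toy_traj = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (3.0, 4.0, 12.0)]
print(traj_length(toy_traj))  # 5.0 + 12.0 = 17.0
```

The same helper can compare a predicted trajectory's length against `gt_traj_len` when computing path-efficiency style metrics.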
 
+ ## Usage
+
+ The Dataset Viewer-compatible table can be loaded with the `datasets` library:
+
 ```python
 from datasets import load_dataset
 ds = load_dataset("EmbodiedCity/EmbodiedNav-Bench")
 print(ds["train"][0])
 ```
 
+ For evaluation, use `dataset/navi_data.pkl` as the canonical data file and follow the setup instructions in the GitHub project repository.
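A minimal sketch of reading the PKL file directly, assuming it deserializes to a list of per-sample dicts as documented above; the round-trip through a temporary file (with made-up values) stands in for the real `dataset/navi_data.pkl`:

```python
import os
import pickle
import tempfile

def load_navi_data(path):
    """Load the canonical PKL file; expected to be a list of sample dicts."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Illustrative sample mirroring the documented fields (all values made up).
sample = {
    "folder": "scene_0001",
    "start_pos": [10.0, 20.0, 30.0],
    "start_rot": [0.0, 0.0, 1.57],
    "start_ang": -45.0,
    "task_desc": "Fly to the red rooftop across the street.",
    "target_pos": [15.0, 40.0, 35.0],
    "gt_traj": [[10.0, 20.0, 30.0], [15.0, 40.0, 35.0]],
    "gt_traj_len": 21.2,
}

# Round-trip through a temporary file to demonstrate the loading path.
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as tmp:
    pickle.dump([sample], tmp)

data = load_navi_data(tmp.name)
print(data[0]["task_desc"])
os.remove(tmp.name)
```

In practice, point `load_navi_data` at `dataset/navi_data.pkl` after downloading the repository; only unpickle files from sources you trust.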
 
+ ## License
+
+ This dataset is released under the CC BY 4.0 license.
+
+ ## Citation
+
+ ```bibtex
+ @misc{liu2025rlbringvlageneralization,
+   title={What Can RL Bring to VLA Generalization? An Empirical Study},
+   author={Jijia Liu and Feng Gao and Bingwen Wei and Xinlei Chen and Qingmin Liao and Yi Wu and Chao Yu and Yu Wang},
+   year={2025},
+   eprint={2505.19789},
+   archivePrefix={arXiv},
+   primaryClass={cs.LG},
+   url={https://arxiv.org/abs/2505.19789},
+ }
+ ```