serendipityAc2Win committed on
Commit 12fc4dd · verified · 1 Parent(s): ae7171e

Clarify viewer table naming


Rename the Dataset Viewer Parquet file and its split from `train` to `viewer`, and update the dataset card to clarify that the Parquet file is for visualization rather than a training split.

README.md CHANGED
@@ -14,12 +14,12 @@ tags:
  - multimodal-reasoning
  - spatial-reasoning
 size_categories:
- - n<1K
 configs:
 - config_name: default
   data_files:
- - split: train
- path: data/train-00000-of-00001.parquet
 ---

 # EmbodiedNav-Bench
@@ -27,13 +27,13 @@ configs:
 [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github&logoColor=white)](https://github.com/serenditipy-AC/Embodied-Navigation-Bench)
 [![arXiv](https://img.shields.io/badge/arXiv-2604.07973-b31b1b.svg?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2604.07973)

- EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating spatial action in urban 3D airspace. The dataset provides natural-language navigation goals, initial drone poses, target positions, and ground-truth 3D trajectories for embodied navigation evaluation.

 This Hugging Face repository hosts the dataset artifacts. The accompanying project code, simulator setup, media examples, and evaluation scripts are maintained in the GitHub repository: https://github.com/serenditipy-AC/Embodied-Navigation-Bench

 ## Dataset Summary

- The current release contains 300 navigation trajectories sampled from the EmbodiedNav-Bench benchmark. Each sample corresponds to one goal-oriented navigation task in an urban 3D environment, with a natural-language goal description and a human-collected ground-truth trajectory.

 The dataset is intended for evaluating embodied navigation, spatial reasoning, and multimodal decision-making models in urban airspace scenarios.

@@ -43,7 +43,7 @@ The dataset is intended for evaluating embodied navigation, spatial reasoning, a
 | :-- | :-- |
 | `dataset/navi_data.pkl` | Canonical PKL file for evaluation. |
 | `dataset/navi_data_preview.json` | Human-readable JSON preview of the PKL content. |
- | `data/train-00000-of-00001.parquet` | Parquet conversion for the Hugging Face Dataset Viewer table. |

 ## Data Fields

@@ -60,7 +60,7 @@ The canonical PKL file stores a list of Python dictionaries. Each sample contain
 | `gt_traj` | `float[N,3]` | Ground-truth trajectory points. |
 | `gt_traj_len` | `float` | Ground-truth trajectory length. |

- The Parquet table includes the same structured fields and additional convenience columns such as `sample_index`, `start_x`, `start_y`, `start_z`, `target_x`, `target_y`, `target_z`, and `gt_traj_num_points`.

 ## Usage

@@ -69,8 +69,8 @@ The Dataset Viewer-compatible table can be loaded with the `datasets` library:
 ```python
 from datasets import load_dataset

- ds = load_dataset("EmbodiedCity/EmbodiedNav-Bench")
- print(ds["train"][0])
 ```

 For evaluation, use `dataset/navi_data.pkl` as the canonical data file and follow the setup instructions in the GitHub project repository.
 
  - multimodal-reasoning
  - spatial-reasoning
 size_categories:
+ - 1K<n<10K
 configs:
 - config_name: default
   data_files:
+ - split: viewer
+ path: data/viewer-00000-of-00001.parquet
 ---

 # EmbodiedNav-Bench

 [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github&logoColor=white)](https://github.com/serenditipy-AC/Embodied-Navigation-Bench)
 [![arXiv](https://img.shields.io/badge/arXiv-2604.07973-b31b1b.svg?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2604.07973)

+ EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating spatial action in urban 3D airspace. The benchmark contains 5,037 high-quality navigation trajectories with natural-language navigation goals, initial drone poses, target positions, and ground-truth 3D trajectories.

 This Hugging Face repository hosts the dataset artifacts. The accompanying project code, simulator setup, media examples, and evaluation scripts are maintained in the GitHub repository: https://github.com/serenditipy-AC/Embodied-Navigation-Bench

 ## Dataset Summary

+ The benchmark contains 5,037 goal-oriented navigation trajectories. Each sample corresponds to one navigation task in an urban 3D environment, with a natural-language goal description and a human-collected ground-truth trajectory.

 The dataset is intended for evaluating embodied navigation, spatial reasoning, and multimodal decision-making models in urban airspace scenarios.

 | :-- | :-- |
 | `dataset/navi_data.pkl` | Canonical PKL file for evaluation. |
 | `dataset/navi_data_preview.json` | Human-readable JSON preview of the PKL content. |
+ | `data/viewer-00000-of-00001.parquet` | Parquet representation for the Hugging Face Dataset Viewer table. |

 ## Data Fields

 | `gt_traj` | `float[N,3]` | Ground-truth trajectory points. |
 | `gt_traj_len` | `float` | Ground-truth trajectory length. |

+ The Parquet table includes the same structured fields and additional convenience columns such as `sample_index`, `start_x`, `start_y`, `start_z`, `target_x`, `target_y`, `target_z`, and `gt_traj_num_points`. It is provided for browsing and visualization in the Hugging Face Dataset Viewer and should not be interpreted as a training split.

 ## Usage

 ```python
 from datasets import load_dataset

+ ds = load_dataset("EmbodiedCity/EmbodiedNav-Bench", split="viewer")
+ print(ds[0])
 ```

 For evaluation, use `dataset/navi_data.pkl` as the canonical data file and follow the setup instructions in the GitHub project repository.
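The canonical PKL can be read with the standard library alone. A minimal round-trip sketch, using a synthetic sample that mirrors the card's documented `gt_traj` / `gt_traj_len` fields (the real `dataset/navi_data.pkl` carries additional fields per sample, so this is illustrative rather than the full schema):

```python
import os
import pickle
import tempfile

# navi_data.pkl is documented as a list of per-sample Python dictionaries.
# `gt_traj` (float[N,3]) and `gt_traj_len` (float) come from the card's
# Data Fields table; the numeric values below are illustrative only.
sample = {
    "gt_traj": [[0.0, 0.0, 10.0], [5.0, 0.0, 10.0], [5.0, 5.0, 12.0]],
    "gt_traj_len": 10.0,
}

# Round-trip through a PKL file, the same way dataset/navi_data.pkl is read.
path = os.path.join(tempfile.mkdtemp(), "navi_data.pkl")
with open(path, "wb") as f:
    pickle.dump([sample], f)

with open(path, "rb") as f:
    data = pickle.load(f)

first = data[0]
print(len(data), first["gt_traj_len"], len(first["gt_traj"]))
# → 1 10.0 3
```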
data/{train-00000-of-00001.parquet → viewer-00000-of-00001.parquet} RENAMED
File without changes
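For orientation, the convenience columns that the card lists for the viewer table could be derived from a PKL sample roughly as below. This is a sketch, not the project's conversion script (which is not part of this repository): the column names come from the dataset card, while taking the start and target positions from the trajectory endpoints is an assumption.

```python
# Hypothetical flattening of one PKL sample into a viewer-table row.
# Column names (`sample_index`, `start_*`, `target_*`,
# `gt_traj_num_points`) are from the dataset card; deriving start/target
# from the first and last trajectory points is an assumption.
def to_viewer_row(idx: int, sample: dict) -> dict:
    traj = sample["gt_traj"]
    start_x, start_y, start_z = traj[0]
    target_x, target_y, target_z = traj[-1]
    return {
        "sample_index": idx,
        "start_x": start_x, "start_y": start_y, "start_z": start_z,
        "target_x": target_x, "target_y": target_y, "target_z": target_z,
        "gt_traj_num_points": len(traj),
    }

sample = {
    "gt_traj": [[0.0, 0.0, 10.0], [5.0, 0.0, 10.0], [5.0, 5.0, 12.0]],
    "gt_traj_len": 10.0,
}
row = to_viewer_row(0, sample)
print(row["gt_traj_num_points"], row["target_z"])
# → 3 12.0
```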