Commit 207f59d by zuzuzzy (verified; parent 6737d31): Add dataset card
---
license: apache-2.0
language:
- en
tags:
- robotics
- vision-language-action
- neuro-symbolic
- manipulation
- primitive-annotations
task_categories:
- robotics
size_categories:
- 1K<n<10K
---

# NS-VLA Dataset: Primitive-Annotated Robotic Manipulation Data

<div align="center">

[![arXiv](https://img.shields.io/badge/arXiv-XXXX.XXXXX-b31b1b.svg)](https://arxiv.org/abs/XXXX.XXXXX)
[![GitHub](https://img.shields.io/badge/GitHub-Code-black)](https://github.com/Zuzuzzy/NS-VLA)
[![Project Page](https://img.shields.io/badge/Project-Page-blue)](https://zuzuzzy.github.io/NS-VLA/)

</div>

## Dataset Description

This dataset provides **primitive annotations** for robotic manipulation demonstrations used to train and evaluate the NS-VLA framework. Each demonstration trajectory is segmented and labeled with structured manipulation primitives.

## Primitive Vocabulary

| Primitive | Description | Frequency |
|:---|:---|:---:|
| `pick` | Grasp a target object | 44.4% |
| `place_in` | Place object inside a container | 25.0% |
| `place_on` | Place object on a surface | 16.7% |
| `close` | Close an appliance (e.g., microwave) | 5.6% |
| `place_rel` | Place relative to another object | — |
| `turn_on` | Activate an appliance (e.g., stove) | — |
| `open` | Open an appliance door | — |

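The vocabulary above can double as a validation set when processing annotations. A minimal sketch, assuming the annotation format described later in this card; the constant `PRIMITIVE_VOCAB` and helper `validate_ops` are illustrative names, not part of the dataset tooling:

```python
# Hypothetical helper: flag ops that fall outside the seven-primitive
# vocabulary listed in the table above.
PRIMITIVE_VOCAB = {
    "pick", "place_in", "place_on", "place_rel",
    "close", "turn_on", "open",
}

def validate_ops(primitives):
    """Return the set of unknown ops in a primitive list (empty set = valid)."""
    return {p["op"] for p in primitives} - PRIMITIVE_VOCAB
```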
## Dataset Structure

```
NS-VLA-Dataset/
├── libero/
│   ├── spatial/      # LIBERO-Spatial task annotations
│   ├── object/       # LIBERO-Object task annotations
│   ├── goal/         # LIBERO-Goal task annotations
│   └── long/         # LIBERO-Long task annotations
├── calvin/
│   └── ABC_D/        # CALVIN ABC→D split annotations
└── metadata.json     # Dataset statistics and splits
```

## Annotation Format

Each annotation file is a JSON object with the following structure:

```json
{
  "task_id": "libero_spatial_01",
  "instruction": "put the white mug on the left plate",
  "primitives": [
    {"op": "pick", "args": {"object": "white_mug"}, "start": 0, "end": 45},
    {"op": "place_on", "args": {"object": "white_mug", "support": "left_plate"}, "start": 46, "end": 102}
  ],
  "total_steps": 102
}
```
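
As a sketch of working with this format, segment boundaries can be sanity-checked: consecutive primitives should tile the trajectory without gaps. The helper name `segments_contiguous` is illustrative, and it assumes `start`/`end` are inclusive frame indices, as suggested by the example above:

```python
import json

# The example annotation from above, inlined for a self-contained check.
ANNOTATION = json.loads("""
{
  "task_id": "libero_spatial_01",
  "instruction": "put the white mug on the left plate",
  "primitives": [
    {"op": "pick", "args": {"object": "white_mug"}, "start": 0, "end": 45},
    {"op": "place_on", "args": {"object": "white_mug", "support": "left_plate"},
     "start": 46, "end": 102}
  ],
  "total_steps": 102
}
""")

def segments_contiguous(ann):
    """Check that each primitive starts right after the previous one ends
    and that the final segment reaches total_steps."""
    prev_end = -1
    for prim in ann["primitives"]:
        if prim["start"] != prev_end + 1:
            return False
        prev_end = prim["end"]
    return prev_end == ann["total_steps"]
```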

## Usage

> ⚠️ **Note**: Dataset files will be released upon paper acceptance. Please check back soon.

```python
from datasets import load_dataset

dataset = load_dataset("Zuzuzzy/NS-VLA-Dataset")
```
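
Once loaded, segment-level annotations can be expanded into per-step supervision, e.g. one primitive label per timestep. A hedged sketch (the helper name is illustrative, and it assumes inclusive `start`/`end` indices running from 0 to `total_steps`, as in the format example above):

```python
def primitives_to_step_labels(primitives, total_steps):
    # One label per timestep, indices 0..total_steps inclusive (assumed
    # convention from the annotation example); steps outside any segment
    # stay None.
    labels = [None] * (total_steps + 1)
    for prim in primitives:
        for t in range(prim["start"], prim["end"] + 1):
            labels[t] = prim["op"]
    return labels
```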

## Associated Benchmarks

- [LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO) — Language-conditioned robotic manipulation
- [LIBERO-Plus](https://arxiv.org/abs/2510.13626) — Robustness evaluation with perturbations
- [CALVIN](https://github.com/mees/calvin) — Long-horizon language-conditioned manipulation

## Citation

```bibtex
@article{zhu2026nsvla,
  title={NS-VLA: Towards Neuro-Symbolic Vision-Language-Action Models},
  author={Zhu, Ziyue and Wu, Shangyang and Zhao, Shuai and Zhao, Zhiqiu and Li, Shengjie and Wang, Yi and Li, Fang and Luo, Haoran},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2026}
}
```

## License

This dataset is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).