OpenDriveLab-org committed
Commit 87c5771 · verified · 1 Parent(s): 5926b54
Upload README.md with huggingface_hub
Files changed (1): README.md (+109 −106)
README.md CHANGED
@@ -8,6 +8,7 @@ configs:
  - config_name: default
  data_files: FlattenFold/base/data/chunk-000/episode_000000.parquet
  ---
  <span style="color: red; font-weight: bold; font-size: 24px;">⚠️ !!! Awaiting information; fill in the links</span>
  <div align="center">
  <a href="">
@@ -21,19 +22,21 @@ configs:
  </a>
  </div>

- # Contents
  - [About the Dataset](#about-the-dataset)
  - [Dataset Structure](#dataset-structure)
  - [Folder hierarchy](#folder-hierarchy)
  - [Details](#details)
- - [Download the Dataset](#download-the-dataset)
- - [Load the Dataset](#get-started)
  - [License and Citation](#license-and-citation)

- # [About the Dataset](#contents)
  - This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)
- - **~130 hours** real world scenarios
  - **Main Tasks**
  - ***FlattenFold***
  - Single task
@@ -51,15 +54,92 @@ configs:
  - If it is a T-shirt, fold the garment
  - If it is a dress shirt, expose the collar, then push it to one side of the table
  - **Count of the dataset**
- | Task | Base (episodes) | DAgger (episodes) | Total |
- |------|-----------------|-------------------|-------|
- | FlattenFold | 3,055 | 3,457 | 6,512 |
- | HangCloth | 6954 | 686 | 7640 |
- | TeeShirtSort | 5988 | - | 5988 |
- | **Total** | **19,608** | **4,143** | **23,751** |
- # [Dataset Structure](#contents)

- ## [Folder hierarchy](#contents)
  Under each task directory, data is partitioned into two subsets: base and dagger.
  - base
  contains
@@ -108,27 +188,27 @@ Kai0-data/
  ```

  <a id='Details'></a>
- ## [Details](#contents)
- ### info.json
  The basic structure of [info.json](#meta/info.json):
  ```json
  {
  "codebase_version": "v2.1",
  "robot_type": "agilex",
- "total_episodes": ...,
- "total_frames": ...,
- "total_tasks": ...,
- "total_videos": ...,
- "total_chunks": ...,
- "chunks_size": ...,
- "fps": ...,
- "splits": {
- "train": ...
  },
  "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
  "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
  "features": {
- "observation.images.top_head": {
  "dtype": "video",
  "shape": [
  480,
@@ -151,10 +231,10 @@ The basic structure of [info.json](#meta/info.json):
  "has_audio": false
  }
  },
- "observation.images.hand_left": {
  ...
  },
- "observation.images.hand_right": {
  ...
  },
  "observation.state": {
  "observation.state": {
@@ -210,7 +290,7 @@ the basic struct of the [info.json](#meta/info.json)
210
  }
211
  ```
212
 
213
- ### [Parquet file format](#contents)
214
  | Field Name | shape | Meaning |
215
  |------------|-------------|-------------|
216
  | observation.state | [N, 14] |left `[:, :6]`, right `[:, 7:13]`, joint angle<br> left`[:, 6]`, right `[:, 13]` , gripper open range|
@@ -221,86 +301,9 @@ The basic structure of [info.json](#meta/info.json):
  | index | [N, 1] | Global unique index across all frames in the dataset |
  | task_index | [N, 1] | Index identifying the task type being performed |

- ## [tasks.jsonl](#FlattenFold/meta/tasks.jsonl)
  Contains task language prompts (natural language instructions) that specify the manipulation task to be performed. Each entry maps a task_index to its corresponding task description, which can be used for language-conditioned policy training.
- # [Download the Dataset](#contents)
- ### Python Script
-
- ```python
- from huggingface_hub import hf_hub_download, snapshot_download
- from datasets import load_dataset
-
- # Download a single file
- hf_hub_download(
-     repo_id="OpenDriveLab-org/kai0",
-     filename="episodes.jsonl",
-     subfolder="meta",
-     repo_type="dataset",
-     local_dir="where/you/want/to/save"
- )
-
- # Download a specific folder
- snapshot_download(
-     repo_id="OpenDriveLab-org/kai0",
-     local_dir="/where/you/want/to/save",
-     repo_type="dataset",
-     allow_patterns=["data/*"]
- )
-
- # Load the entire dataset
- dataset = load_dataset("OpenDriveLab-org/kai0")
- ```
-
- ### Terminal (CLI)
-
- ```bash
- # Download a single file
- hf download OpenDriveLab-org/kai0 \
-     --include "meta/info.json" \
-     --repo-type dataset \
-     --local-dir "/where/you/want/to/save"
-
- # Download a specific folder
- hf download OpenDriveLab-org/kai0 \
-     --repo-type dataset \
-     --include "meta/*" \
-     --local-dir "/where/you/want/to/save"
-
- # Download the entire dataset
- hf download OpenDriveLab-org/kai0 \
-     --repo-type dataset \
-     --local-dir "/where/you/want/to/save"
- ```
-
- # [Load the dataset](#contents)
-
- ## For LeRobot version < 0.4.0
-
- Choose the appropriate import based on your version:
-
- | Version | Import Path |
- |---------|-------------|
- | `<= 0.1.0` | `from lerobot.common.datasets.lerobot_dataset import LeRobotDataset` |
- | `> 0.1.0` and `< 0.4.0` | `from lerobot.datasets.lerobot_dataset import LeRobotDataset` |
-
- ```python
- # For version <= 0.1.0
- from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-
- # For version > 0.1.0 and < 0.4.0
- from lerobot.datasets.lerobot_dataset import LeRobotDataset
-
- # Load the dataset
- dataset = LeRobotDataset(repo_id='where/the/dataset/you/stored')
- ```

- ## For LeRobot version >= 0.4.0
-
- You need to migrate the dataset from v2.1 to v3.0 first. See the official documentation: [Migrate the dataset from v2.1 to v3.0](https://huggingface.co/docs/lerobot/lerobot-dataset-v3)
-
- ```bash
- python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<HF_USER/DATASET_ID>
- ```
  <span style="color: red; font-weight: bold; font-size: 24px;">⚠️ !!! Awaiting information to fill in</span>
  # License and Citation
  All the data and code within this repo are under [](). Please consider citing our project if it helps your research.

  - config_name: default
  data_files: FlattenFold/base/data/chunk-000/episode_000000.parquet
  ---
+ # KAI0
  <span style="color: red; font-weight: bold; font-size: 24px;">⚠️ !!! Awaiting information; fill in the links</span>
  <div align="center">
  <a href="">
 
  </a>
  </div>

+ # TODO
+ - [ ] The advantage labels are coming soon.

+ ## Contents
  - [About the Dataset](#about-the-dataset)
+ - [Load the Dataset](#get-started)
+ - [Download the Dataset](#download-the-dataset)
  - [Dataset Structure](#dataset-structure)
  - [Folder hierarchy](#folder-hierarchy)
  - [Details](#details)
  - [License and Citation](#license-and-citation)

+ ## [About the Dataset](#contents)
  - This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)
+ - **~134 hours** of real-world scenarios
  - **Main Tasks**
  - ***FlattenFold***
  - Single task
 
  - If it is a T-shirt, fold the garment
  - If it is a dress shirt, expose the collar, then push it to one side of the table
  - **Count of the dataset**

+ | Task | Base (episodes / hours) | DAgger (episodes / hours) | Total (episodes / hours) |
+ |------|-------------------------|---------------------------|--------------------------|
+ | FlattenFold | 3,055 / ~42 hours | 3,457 / ~13 hours | 6,512 / ~55 hours |
+ | HangCloth | 6,954 / ~61 hours | 686 / ~12 hours | 7,640 / ~73 hours |
+ | TeeShirtSort | 5,988 / ~31 hours | 769 / ~22 hours | 6,757 / ~53 hours |
+ | **Total** | **15,997 / ~134 hours** | **4,912 / ~47 hours** | **20,909 / ~181 hours** |
+
+ ## [Load the Dataset](#contents)
+ ### For LeRobot version < 0.4.0
+ Choose the appropriate import based on your version:
+
+ | Version | Import Path |
+ |---------|-------------|
+ | `<= 0.1.0` | `from lerobot.common.datasets.lerobot_dataset import LeRobotDataset` |
+ | `> 0.1.0` and `< 0.4.0` | `from lerobot.datasets.lerobot_dataset import LeRobotDataset` |
+
+ ```python
+ # For version <= 0.1.0
+ from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
+
+ # For version > 0.1.0 and < 0.4.0
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+ # Load the dataset
+ dataset = LeRobotDataset(repo_id='where/the/dataset/you/stored')
+ ```
+
+ ### For LeRobot version >= 0.4.0
+
+ You need to migrate the dataset from v2.1 to v3.0 first. See the official documentation: [Migrate the dataset from v2.1 to v3.0](https://huggingface.co/docs/lerobot/lerobot-dataset-v3)
+
+ ```bash
+ python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<HF_USER/DATASET_ID>
+ ```
+
+ ## [Download the Dataset](#contents)
+ ### Python Script
+ ```python
+ from huggingface_hub import hf_hub_download, snapshot_download
+ from datasets import load_dataset
+
+ # Download a single file
+ hf_hub_download(
+     repo_id="OpenDriveLab-org/kai0",
+     filename="episodes.jsonl",
+     subfolder="meta",
+     repo_type="dataset",
+     local_dir="where/you/want/to/save"
+ )
+
+ # Download a specific folder
+ snapshot_download(
+     repo_id="OpenDriveLab-org/kai0",
+     local_dir="/where/you/want/to/save",
+     repo_type="dataset",
+     allow_patterns=["data/*"]
+ )
+
+ # Load the entire dataset
+ dataset = load_dataset("OpenDriveLab-org/kai0")
+ ```
+
+ ### Terminal (CLI)
+ ```bash
+ # Download a single file
+ hf download OpenDriveLab-org/kai0 \
+     --include "meta/info.json" \
+     --repo-type dataset \
+     --local-dir "/where/you/want/to/save"
+
+ # Download a specific folder
+ hf download OpenDriveLab-org/kai0 \
+     --repo-type dataset \
+     --include "meta/*" \
+     --local-dir "/where/you/want/to/save"
+
+ # Download the entire dataset
+ hf download OpenDriveLab-org/kai0 \
+     --repo-type dataset \
+     --local-dir "/where/you/want/to/save"
+ ```
+
+ ## [Dataset Structure](#contents)
+
+ ### [Folder hierarchy](#contents)
  Under each task directory, data is partitioned into two subsets: base and dagger.
  - base
  contains
 
  ```

  <a id='Details'></a>
+ ### [Details](#contents)
+ #### info.json
  The basic structure of [info.json](#meta/info.json):
  ```json
  {
  "codebase_version": "v2.1",
  "robot_type": "agilex",
+ "total_episodes": ..., # total number of episodes in the dataset
+ "total_frames": ..., # total number of frames from a single camera perspective
+ "total_tasks": ..., # total number of tasks
+ "total_videos": ..., # total number of videos across all camera perspectives
+ "total_chunks": ..., # number of chunks in the dataset
+ "chunks_size": ..., # maximum number of episodes per chunk
+ "fps": ..., # frame rate (frames per second)
+ "splits": { # how the dataset is split
+ "train": ...
  },
  "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
  "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
  "features": {
+ "observation.images.top_head": { # camera perspective
  "dtype": "video",
  "shape": [
  480,

  "has_audio": false
  }
  },
+ "observation.images.hand_left": { # camera perspective
  ...
  },
+ "observation.images.hand_right": { # camera perspective
  ...
  },
  "observation.state": {

  }
  ```
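The `data_path` and `video_path` entries above are plain Python format strings. A minimal sketch of resolving them for one episode; the `chunks_size` value used here and the `episode_chunk = episode_index // chunks_size` rule are assumptions based on the common LeRobot chunking convention, not stated in this README:

```python
# Resolve the data_path / video_path templates from info.json for one episode.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

chunks_size = 1000  # illustrative; read the real value from info.json
episode_index = 1234
episode_chunk = episode_index // chunks_size  # assumed chunking rule -> 1

print(data_path.format(episode_chunk=episode_chunk, episode_index=episode_index))
# data/chunk-001/episode_001234.parquet
print(video_path.format(episode_chunk=episode_chunk,
                        episode_index=episode_index,
                        video_key="observation.images.top_head"))
# videos/chunk-001/observation.images.top_head/episode_001234.mp4
```

The zero-padded widths (`:03d`, `:06d`) match the `chunk-000/episode_000000` names shown in the folder hierarchy.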

+ #### [Parquet file format](#contents)
  | Field Name | shape | Meaning |
  |------------|-------------|-------------|
  | observation.state | [N, 14] | left `[:, :6]`, right `[:, 7:13]`, joint angle<br>left `[:, 6]`, right `[:, 13]`, gripper open range |

  | index | [N, 1] | Global unique index across all frames in the dataset |
  | task_index | [N, 1] | Index identifying the task type being performed |
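Given the `observation.state` layout in the table above, a minimal slicing sketch; the dummy zero array stands in for the column you would load from a real episode parquet (the loading tooling itself is left out):

```python
import numpy as np

# Dummy stand-in for observation.state loaded from an episode parquet: [N, 14].
state = np.zeros((5, 14))

left_joints = state[:, :6]     # left arm joint angles
left_gripper = state[:, 6]     # left gripper open range
right_joints = state[:, 7:13]  # right arm joint angles
right_gripper = state[:, 13]   # right gripper open range

print(left_joints.shape, right_joints.shape)  # (5, 6) (5, 6)
```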

+ ### [tasks.jsonl](#FlattenFold/meta/tasks.jsonl)
  Contains task language prompts (natural language instructions) that specify the manipulation task to be performed. Each entry maps a task_index to its corresponding task description, which can be used for language-conditioned policy training.
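As a sketch, the mapping described above can be built by parsing one JSON object per line; the `{"task_index": ..., "task": ...}` per-line schema and the example prompts below are assumptions for illustration (based on the usual LeRobot v2.1 layout):

```python
import io
import json

# Illustrative tasks.jsonl content (not the real file).
tasks_jsonl = io.StringIO(
    '{"task_index": 0, "task": "Flatten the garment on the table."}\n'
    '{"task_index": 1, "task": "Fold the T-shirt."}\n'
)

# One JSON object per line: map task_index -> natural-language prompt.
tasks = {}
for line in tasks_jsonl:
    entry = json.loads(line)
    tasks[entry["task_index"]] = entry["task"]

print(tasks[1])  # Fold the T-shirt.
```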

  <span style="color: red; font-weight: bold; font-size: 24px;">⚠️ !!! Awaiting information to fill in</span>
  # License and Citation
  All the data and code within this repo are under [](). Please consider citing our project if it helps your research.