Facebear committed · Commit cb56ce7 · verified · 1 Parent(s): 14734bb

Update README.md

Files changed (1): README.md +111 -1
README.md CHANGED
@@ -19,7 +19,117 @@ This dataset contains 1,500 episodes of cloth folding, collected using Agilex's
  - Performance: Near 100% success rate in completing the folding task

  # Usage
- To be updated🏗️
+ You can find `.hdf5` and `.mp4` files in each directory. The `.mp4` files are only used for visualization, not for training. The `.hdf5` files contain all necessary keys and data, including:
+
+ ## HDF5 file hierarchy
+ ```
+ ├── action                   # nx14 absolute bimanual joints, not used in our paper
+ ├── base_action              # nx2 chassis actions, not used in our paper
+ ├── language_instruction     # 🌟"fold the cloth"
+ ├── observations
+ │   ├── eef                  # nx14 absolute eef pose using Euler angles to represent the rotation, not used in our paper
+ │   ├── eef_quaternion       # nx16 absolute eef pose using quaternions to represent the rotation, not used in our paper
+ │   ├── eef_6d               # 🌟nx20 absolute eef pose using rotate6d to represent the rotation
+ │   ├── eef_left_time        # 🌟nx1 timestamps for the left-arm eef pose, can be used for resampling or interpolation
+ │   ├── eef_right_time       # 🌟nx1 timestamps for the right-arm eef pose, can be used for resampling or interpolation
+ │   ├── qpos                 # nx14 absolute bimanual joints, not used in our paper
+ │   ├── qpos_left_time       # nx1 timestamps for the left-arm joint positions, can be used for resampling or interpolation, not used in our paper
+ │   ├── qpos_right_time      # nx1 timestamps for the right-arm joint positions, can be used for resampling or interpolation, not used in our paper
+ │   ├── qvel                 # nx14 bimanual joint velocities, not used in our paper
+ │   ├── effort               # nx14 bimanual joint efforts, not used in our paper
+ │   └── images
+ │       ├── cam_high         # 🌟the encoded head-camera view, should be decoded using cv2
+ │       ├── cam_left_wrist   # 🌟the encoded left-wrist view, should be decoded using cv2
+ │       └── cam_right_wrist  # 🌟the encoded right-wrist view, should be decoded using cv2
+ └── time_stamp               # timestamps for each sample, not used in our paper
+ ```
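The `eef_6d` key above stores rotations in the continuous 6D ("rotate6d") representation. A minimal sketch of recovering a 3x3 rotation matrix from six rotation components via Gram-Schmidt; note that the exact per-arm layout of the 20 dimensions is an assumption here, so check the dataset's actual ordering before use:

```python
import numpy as np

def rotate6d_to_matrix(d6):
    """Recover a 3x3 rotation matrix from a 6D rotation vector via Gram-Schmidt."""
    # the 6 numbers are taken as the first two columns of R
    a1, a2 = d6[:3], d6[3:6]
    b1 = a1 / np.linalg.norm(a1)          # normalize the first axis
    a2 = a2 - np.dot(b1, a2) * b1         # remove the component along b1
    b2 = a2 / np.linalg.norm(a2)          # normalize the second axis
    b3 = np.cross(b1, b2)                 # third axis from the cross product
    return np.stack([b1, b2, b3], axis=1)

# identity rotation, encoded by the first two columns of the identity matrix
R = rotate6d_to_matrix(np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))
```

This is the standard continuous 6D rotation parameterization; the inverse mapping simply takes the first two columns of the rotation matrix.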
+
+ How to read the hdf5 file:
+
+ ```
+ import h5py
+ import cv2
+ import io
+ from mmengine import fileio
+
+ path = "REPLACE WITH YOUR HDF5 FILE PATH HERE"
+
+ # load the hdf5 file
+ value = fileio.get(path)
+ f = io.BytesIO(value)
+ h = h5py.File(f, 'r')
+
+ # you can inspect the hdf5 hierarchy by printing its keys
+ print(h.keys())
+
+ # one example of reading out the data, e.g. the 'cam_high' stream
+ head_view_bytes = h['observations/images/cam_high'][()]  # 🌟 NOTE: we compress all images to bytes using cv2.imencode
+ head_view = cv2.imdecode(head_view_bytes[0], cv2.IMREAD_COLOR)  # 🌟 NOTE: decode each frame back to an image before further usage
+
+ # Then feel free to use our data :)
+ # ...
+ # ...
+ ```
+
+ ## Visualize the data
+ Some directories already contain a `.mp4` file for visualization. If you want to visualize all the `.hdf5` files, you can run the following script:
+
+ ```
+ from mmengine import fileio
+ import io
+ import h5py
+ import cv2
+ from tqdm import tqdm
+ import os
+ import imageio
+ import numpy as np
+
+ # 🌟 Just replace the path here and run this script. It will generate an .mp4 file for every .hdf5 file
+ top_path = 'REPLACE WITH YOUR XVLA-SOFT-FOLD PATH'
+ hdf5_files = fileio.list_dir_or_file(top_path, suffix='.hdf5', recursive=True, list_dir=False)
+
+ for hdf5_name in hdf5_files:
+     path = os.path.join(top_path, hdf5_name)
+     # Save the MP4 next to its HDF5 file
+     video_path = path.replace('.hdf5', '.mp4')
+     fps = 30  # Adjust the FPS if needed
+     image_list = []
+     print(video_path)
+     if os.path.exists(video_path):
+         print(f"pass {video_path}, it already exists")
+         continue
+
+     value = fileio.get(path)
+     f = io.BytesIO(value)
+     h = h5py.File(f, 'r')
+
+     images = h['/observations/images/cam_high'][()]
+     images_left = h['/observations/images/cam_left_wrist'][()]
+     images_right = h['/observations/images/cam_right_wrist'][()]
+     ep_len = images.shape[0]
+
+     for i in tqdm(range(ep_len)):
+         # Decode each view from bytes (cv2 returns BGR)
+         img = cv2.imdecode(images[i], cv2.IMREAD_COLOR)
+         img_left = cv2.imdecode(images_left[i], cv2.IMREAD_COLOR)
+         img_right = cv2.imdecode(images_right[i], cv2.IMREAD_COLOR)
+
+         # Concatenate the three views side by side, then convert BGR -> RGB for imageio
+         img = np.concatenate([img, img_left, img_right], axis=1)
+         image_list.append(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
+
+     # Write all frames to an MP4 video
+     imageio.mimsave(video_path, image_list, fps=fps)
+ ```
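Because the two arms are time-stamped separately (`eef_left_time` / `eef_right_time` in the hierarchy above), a common preprocessing step is to resample both trajectories onto a shared uniform time grid. A minimal linear-interpolation sketch; the helper name and the sampling rate are illustrative, not part of the dataset tooling, and rotations may deserve a proper method such as slerp rather than plain linear interpolation:

```python
import numpy as np

def resample_uniform(values, times, hz=30.0):
    """Linearly resample an (n, d) trajectory recorded at (n,) timestamps onto a uniform grid."""
    times = np.asarray(times).reshape(-1)
    grid = np.arange(times[0], times[-1], 1.0 / hz)  # uniform timestamps
    # interpolate each dimension independently
    return np.stack(
        [np.interp(grid, times, values[:, j]) for j in range(values.shape[1])],
        axis=1,
    )

# toy 2-D trajectory at irregular times, resampled to 10 Hz
vals = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
t = np.array([0.0, 0.5, 1.0])
out = resample_uniform(vals, t, hz=10.0)
```

Resampling both arms onto the same grid makes it straightforward to stack their poses into a single action vector per step.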
+

  # Citation
  If you use this dataset in your research or for any related work, please cite the X-VLA Paper: