---
license: mit
task_categories:
- robotics
language:
- en
size_categories:
- 1K<n<10K
---
# Cloth-Folding Dataset for X-VLA Paper

This dataset contains roughly 1,500 episodes of cloth folding collected on Agilex's Aloha robotic arm. It was used in the **X-VLA** paper for the cloth-folding task, on which the policy achieved a near-perfect success rate.

# Dataset Overview

- Total Episodes: ~1,500
- Task: Automated cloth folding
- Robot: Agilex Aloha
- Performance: Near 100% success rate in completing the folding task

# Hardware setup

We observed that the head camera on the official Agilex Aloha platform is mounted relatively low, so it cannot capture the full cloth-folding process: many frames fail to include the robot arms. To address this, we modified the camera setup.
You can find the `.stl`, `.step`, and `.sldprt` files of our new camera mount, which can be used for 3D printing. Installation instructions can be found in `camera_mount_install.md`.

# Usage
You can find `.hdf5` and `.mp4` files in each directory. The `.mp4` files are only for visualization and are not used for training. The `.hdf5` files contain all the necessary keys and data, including:

## HDF5 file hierarchy
```
├── action                 # nx14 absolute bimanual joints, not used in our paper
├── base_action            # nx2 chassis actions, not used in our paper
├── language_instruction   # 🌟 "fold the cloth"
├── observations
│   ├── eef                # nx14 absolute eef pose, Euler-angle rotation, not used in our paper
│   ├── eef_quaternion     # nx16 absolute eef pose, quaternion rotation, not used in our paper
│   ├── eef_6d             # 🌟 nx20 absolute eef pose, rotate6d rotation
│   ├── eef_left_time      # 🌟 nx1 timestamp of each left-arm eef pose, usable for resampling or interpolation
│   ├── eef_right_time     # 🌟 nx1 timestamp of each right-arm eef pose, usable for resampling or interpolation
│   ├── qpos               # nx14 absolute bimanual joints, not used in our paper
│   ├── qpos_left_time     # nx1 timestamp of each left-arm joint position, not used in our paper
│   ├── qpos_right_time    # nx1 timestamp of each right-arm joint position, not used in our paper
│   ├── qvel               # nx14 bimanual joint velocities, not used in our paper
│   ├── effort             # nx14 bimanual joint efforts, not used in our paper
│   └── images
│       ├── cam_high        # 🌟 encoded head-camera view, decode with cv2
│       ├── cam_left_wrist  # 🌟 encoded left-wrist view, decode with cv2
│       └── cam_right_wrist # 🌟 encoded right-wrist view, decode with cv2
├── time_stamp             # timestamp of each sample, not used in our paper
```

How to read an `.hdf5` file:

```
import io

import cv2
import h5py
from mmengine import fileio

path = "REPLACE WITH YOUR HDF5 FILE PATH"

# Load the hdf5 file.
value = fileio.get(path)
f = io.BytesIO(value)
h = h5py.File(f, 'r')

# Inspect the hdf5 hierarchy by printing its keys.
print(h.keys())

# Example: read the 'cam_high' data.
head_view_bytes = h['observations/images/cam_high'][()]  # 🌟 NOTE: all images are compressed to bytes with cv2.imencode
head_view = cv2.imdecode(head_view_bytes[0], cv2.IMREAD_COLOR)  # 🌟 NOTE: decode each frame back to an image (BGR, OpenCV's default) before use

# Then you are free to use our data :)
# ...
# ...
```
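Since the two arms are logged on separate clocks (`eef_left_time` / `eef_right_time`), you may want to resample both trajectories onto a common time grid before training. Below is a minimal sketch of per-channel linear interpolation with NumPy; the arrays are synthetic stand-ins for the dataset's `eef_6d` and timestamp keys (not the real data), and `np.interp` is just one reasonable choice of interpolator.

```
import numpy as np

# Synthetic stand-ins for the 10 left-arm channels of h['observations/eef_6d']
# and for h['observations/eef_left_time'] -- NOT the real dataset contents.
t_left = np.array([0.00, 0.05, 0.10, 0.15])                          # per-frame timestamps
left_eef = np.stack([t_left * (c + 1) for c in range(10)], axis=1)   # nx10 fake channels

# Resample onto a common query grid, channel by channel.
t_query = np.array([0.025, 0.075, 0.125])
resampled = np.stack(
    [np.interp(t_query, t_left, left_eef[:, c]) for c in range(left_eef.shape[1])],
    axis=1,
)
print(resampled.shape)  # (3, 10)
```

The same pattern applies to the right arm with `eef_right_time`; after interpolating both arms onto one grid, the two halves can be concatenated back into a single state vector.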

## Visualize the data
Some directories contain `.mp4` files for visualization. If you want to render visualizations for all `.hdf5` files, run the following code:

```
import io
import os

import cv2
import h5py
import imageio
import numpy as np
from mmengine import fileio
from tqdm import tqdm

# 🌟 Just replace the path here, then run this script. It generates an .mp4 next to every .hdf5 file.
top_path = 'REPLACE WITH YOUR XVLA-SOFT-FOLD PATH'
hdf5_files = fileio.list_dir_or_file(top_path, suffix='.hdf5', recursive=True, list_dir=False)

for hdf5_name in hdf5_files:
    path = os.path.join(top_path, hdf5_name)
    video_path = path.replace('.hdf5', '.mp4')
    fps = 30  # Adjust the FPS if needed
    print(video_path)
    if os.path.exists(video_path):
        print(f"skip {video_path}, it already exists")
        continue

    value = fileio.get(path)
    f = io.BytesIO(value)
    h = h5py.File(f, 'r')

    images = h['/observations/images/cam_high'][()]
    images_left = h['/observations/images/cam_left_wrist'][()]
    images_right = h['/observations/images/cam_right_wrist'][()]
    ep_len = images.shape[0]

    image_list = []
    for i in tqdm(range(ep_len)):
        # Decode each view from bytes (cv2 returns BGR).
        img = cv2.imdecode(images[i], cv2.IMREAD_COLOR)
        img_left = cv2.imdecode(images_left[i], cv2.IMREAD_COLOR)
        img_right = cv2.imdecode(images_right[i], cv2.IMREAD_COLOR)

        # Concatenate the three views side by side and convert to RGB for imageio.
        frame = np.concatenate([img, img_left, img_right], axis=1)
        image_list.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    # Write all frames of this episode to an MP4.
    imageio.mimsave(video_path, image_list, fps=fps)
```


# Citation
If you use this dataset in your research or for any related work, please cite the X-VLA paper:

```
@article{zheng2025x,
  title={X-VLA: Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model},
  author={Zheng, Jinliang and Li, Jianxiong and Wang, Zhihao and Liu, Dongxiu and Kang, Xirui and Feng, Yuchun and Zheng, Yinan and Zou, Jiayin and Chen, Yilun and Zeng, Jia and others},
  journal={arXiv preprint arXiv:2510.10274},
  year={2025}
}
```