---
license: mit
tags:
- hdf5
- imitation-learning
---

# Dataset Card for GM-100 Xtrainer Part (HDF5 Format)

[](https://www.rhos.ai/research/gm-100)
[](https://arxiv.org/abs/2601.11421)

This dataset is part of the [Great March 100 (GM-100)](https://www.rhos.ai/research/gm-100) project. It contains raw teleoperation data collected with the Dobot Xtrainer robot, stored in HDF5 format.

## Dataset Description

This repository contains raw robotic manipulation data stored in **HDF5 (`.hdf5`)** format. It is designed for imitation learning tasks using the Dobot Xtrainer hardware stack.

- **Robot Platform:** Dobot Xtrainer
- **Data Format:** HDF5
- **Camera Views:** Top, Left Wrist, Right Wrist

## File Structure

The dataset consists of individual HDF5 files, where each file represents one trajectory (episode):

```text
.
├── task_00001/                 # Task 1 Directory
│   ├── train/                  # Training episodes for Task 1
│   │   ├── episode_init_0.hdf5
│   │   ├── episode_init_1.hdf5
│   │   └── ...
│   └── eval/                   # Evaluation/Test episodes for Task 1
│       ├── episode_init_0.hdf5
│       └── ...
├── task_00002/                 # Task 2 Directory
│   ├── ...
└── ...
```

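Given this layout, episode files can be enumerated and indexed programmatically. The following sketch shows one way to recover the task ID, split, and episode index from a path; the `parse_episode_path` helper is illustrative, not part of any dataset tooling:

```python
from pathlib import Path

def parse_episode_path(path: str) -> dict:
    """Extract task ID, split, and episode index from an episode path.

    Assumes the layout shown above: task_XXXXX/{train,eval}/episode_init_N.hdf5.
    """
    p = Path(path)
    task_dir, split, name = p.parts[-3], p.parts[-2], p.stem
    return {
        "task": int(task_dir.split("_")[1]),    # e.g. 1 from "task_00001"
        "split": split,                         # "train" or "eval"
        "episode": int(name.rsplit("_", 1)[1])  # e.g. 0 from "episode_init_0"
    }

print(parse_episode_path("task_00001/train/episode_init_0.hdf5"))
# {'task': 1, 'split': 'train', 'episode': 0}
```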
## HDF5 Internal Structure

Each `.hdf5` file contains the following groups and datasets. The sequence length is denoted by `T`.

### Root Group

| Key | Shape | Dtype | Description |
| --- | --- | --- | --- |
| `/action` | `(T, 14)` | float | Target joint positions (the command sent to the robot) |

### Observations Group (`/observations`)

| Key | Shape | Dtype | Description |
| --- | --- | --- | --- |
| `/observations/qpos` | `(T, 14)` | float | Actual joint positions (state) |

### Images Group (`/observations/images`)

Images are stored under `/observations/images/`.

> **Note:** Images are stored as compressed binary strings (e.g., JPEG buffers) to save space. They must be decoded before use.

| Key | Decoded Shape | Type | Description |
| --- | --- | --- | --- |
| `top` | `(T, 480, 640, 3)` | uint8 (binary string) | Top static camera RGB |
| `left_wrist` | `(T, 480, 640, 3)` | uint8 (binary string) | Left wrist camera RGB |
| `right_wrist` | `(T, 480, 640, 3)` | uint8 (binary string) | Right wrist camera RGB |

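A quick sanity check of the low-dimensional schema above can be sketched as follows. The `check_schema` helper is hypothetical (not part of the dataset), and the zero arrays are stand-ins for data loaded from a real file:

```python
import numpy as np

def check_schema(qpos: np.ndarray, action: np.ndarray) -> int:
    """Verify that qpos/action match the documented (T, 14) layout; return T."""
    assert qpos.ndim == 2 and action.ndim == 2, "expected 2-D (T, 14) arrays"
    assert qpos.shape[1] == 14 and action.shape[1] == 14, "expected 14-DOF vectors"
    assert qpos.shape[0] == action.shape[0], "qpos/action length mismatch"
    return qpos.shape[0]

# Stand-in arrays; with a real file these would be the loaded datasets
T = check_schema(np.zeros((5, 14)), np.zeros((5, 14)))
print(T)  # 5
```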
## Hardware Specification

### Joint Order (14-DOF)

The state and action vectors correspond to the following motors, in order:

1. `right_waist`
2. `right_shoulder`
3. `right_elbow`
4. `right_forearm_roll`
5. `right_wrist_angle`
6. `right_wrist_rotate`
7. `right_gripper`
8. `left_waist`
9. `left_shoulder`
10. `left_elbow`
11. `left_forearm_roll`
12. `left_wrist_angle`
13. `left_wrist_rotate`
14. `left_gripper`

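The joint order above can be encoded directly in code. In this sketch the `JOINT_NAMES` list mirrors the list above, while the `RIGHT`/`LEFT` slice helpers are illustrative conventions, not something defined by the dataset itself:

```python
import numpy as np

# Index order matches the 14-DOF list documented above
JOINT_NAMES = [
    "right_waist", "right_shoulder", "right_elbow", "right_forearm_roll",
    "right_wrist_angle", "right_wrist_rotate", "right_gripper",
    "left_waist", "left_shoulder", "left_elbow", "left_forearm_roll",
    "left_wrist_angle", "left_wrist_rotate", "left_gripper",
]

RIGHT = slice(0, 7)   # right arm + gripper
LEFT = slice(7, 14)   # left arm + gripper

qpos = np.arange(14, dtype=np.float32)  # stand-in for one frame of /observations/qpos
right_arm, left_arm = qpos[RIGHT], qpos[LEFT]

print(dict(zip(JOINT_NAMES[RIGHT], right_arm.tolist())))
```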
## Usage Example

You can read the data using Python's `h5py` library.

```python
import os

import cv2
import h5py
import numpy as np

# Example path (adjust as needed)
file_path = "task_00001/train/episode_init_0.hdf5"

if not os.path.exists(file_path):
    raise FileNotFoundError(f"Please check path: {file_path}")

with h5py.File(file_path, "r") as f:
    # 1. Load low-dimensional data
    qpos = f["/observations/qpos"][:]
    action = f["/action"][:]

    print(f"Loaded episode with {qpos.shape[0]} frames.")

    # 2. Load and decode images
    # Access the specific camera dataset
    camera_data = f["/observations/images/top"]

    # Read the first frame's compressed data
    compressed_data = camera_data[0]

    # Decode using OpenCV:
    # np.frombuffer converts the binary string to a 1-D uint8 array,
    # then cv2.imdecode decompresses it to an H x W x 3 BGR image
    image = cv2.imdecode(np.frombuffer(compressed_data, np.uint8), cv2.IMREAD_COLOR)

    print(f"Decoded image shape: {image.shape}")  # Should be (480, 640, 3)
```

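To decode every frame of a camera stream rather than just the first, the same pattern extends naturally. A minimal sketch follows; the `decode_episode` helper is illustrative, and its `decode` parameter is injectable so the stacking logic can be exercised without OpenCV or a real file:

```python
import numpy as np

def decode_episode(frames, decode=None):
    """Decode a sequence of compressed image buffers into one (T, H, W, 3) array.

    `frames` is any sequence of binary strings, e.g. f["/observations/images/top"].
    By default each buffer is decoded with OpenCV, as in the example above.
    """
    if decode is None:
        import cv2  # imported lazily so the stacking logic has no hard cv2 dependency
        decode = lambda buf: cv2.imdecode(np.frombuffer(buf, np.uint8), cv2.IMREAD_COLOR)
    return np.stack([decode(buf) for buf in frames])

# Illustration with a stand-in decoder; real use: decode_episode(f["/observations/images/top"])
video = decode_episode([b"x"] * 3, decode=lambda buf: np.zeros((480, 640, 3), np.uint8))
print(video.shape)  # (3, 480, 640, 3)
```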
## Citation

```bibtex
@misc{wang2026greatmarch100100,
      title={The Great March 100: 100 Detail-oriented Tasks for Evaluating Embodied AI Agents},
      author={Ziyu Wang and Chenyuan Liu and Yushun Xiang and Runhao Zhang and Yu Zhang and Qingbo Hao and Hongliang Lu and Houyu Chen and Zhizhong Feng and Kaiyue Zheng and Dehao Ye and Xianchao Zeng and Xinyu Zhou and Boran Wen and Jiaxin Li and Mingyu Zhang and Kecheng Zheng and Qian Zhu and Ran Cheng and Yong-Lu Li},
      year={2026},
      eprint={2601.11421},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2601.11421},
}
```