Update README.md

README.md (CHANGED)
@@ -9,132 +9,50 @@ configs:
  data_files: data/*/*.parquet
---
Removed (the old auto-generated dataset card; its feature-schema JSON, reconstructed from the recoverable tail of the diff):

```json
{
  "features": {
    "observation.images.head_cam": {
      "dtype": "video",
      "shape": [3, 720, 960],
      "names": ["c", "h", "w"],
      "info": {
        "video.height": 720,
        "video.width": 960,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "video.fps": 30,
        "video.channels": 3,
        "has_audio": false
      }
    },
    "observation.images.wrist_cam": {
      "dtype": "video",
      "shape": [3, 480, 640],
      "names": ["c", "h", "w"],
      "info": {
        "video.height": 480,
        "video.width": 640,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "video.fps": 30,
        "video.channels": 3,
        "has_audio": false
      }
    },
    "observation.state": {
      "dtype": "float32",
      "shape": [77],
      "names": ["state"]
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null }
  }
}
```
Added (the new dataset card):

# April Robotics Data Sample

This dataset contains 3D ground-truth hand and finger annotations combined with egocentric and wrist-mounted recordings of industrial assembly operations, captured in an active manufacturing environment. The data showcases our multimodal capture system: head- and wrist-mounted cameras plus a sensorized glove that together capture precise, high-quality human hand motion and manipulation.

## Dataset Summary

- **Total episodes:** 6
- **Frames:** 7205
- **Tasks:** 2 distinct industrial manipulation tasks
- **Modalities:** RGB, 3D hand keypoints, wrist pose, head pose, language annotations
- **Resolution and rate:** 960 × 720 (head cam), 640 × 480 (wrist cam) @ 30 FPS
- **Format:** LeRobot Dataset v3

## How to Use

**Quick Example:**

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

EPISODE_ID = 0
SAMPLE_ID = 10

dataset = LeRobotDataset(repo_id="aprilrobotics/sample", episodes=[EPISODE_ID])
frame = dataset[SAMPLE_ID]

# Camera images (H, W, 3)
head_img = frame["observation.images.head_cam"].numpy().transpose(1, 2, 0)
wrist_img = frame["observation.images.wrist_cam"].numpy().transpose(1, 2, 0)

# State: [head_pose(7), wrist_pose(7), hand_keypoints(63)]
# pose = [x, y, z, qx, qy, qz, qw]
state = frame["observation.state"].numpy()
head_pose = state[0:7]    # position + quaternion (xyzw)
wrist_pose = state[7:14]  # position + quaternion (xyzw)

# 21 hand keypoints:
# 0=wrist, 1-4=thumb, 5-8=index, 9-12=middle, 13-16=ring, 17-20=pinky
keypoints = state[14:].reshape(21, 3)
```
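The `[x, y, z, qx, qy, qz, qw]` pose layout above can be turned into a 4×4 homogeneous transform with plain NumPy. This is a minimal sketch (the `pose_to_matrix` helper is ours, not part of the dataset tooling), useful when placing the head, wrist, and keypoints in a common frame:

```python
import numpy as np

def pose_to_matrix(pose):
    """[x, y, z, qx, qy, qz, qw] -> 4x4 homogeneous transform (unit quaternion, xyzw)."""
    x, y, z, qx, qy, qz, qw = pose
    # Standard rotation matrix from a unit quaternion in xyzw convention
    rot = np.array([
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw),     2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw),     1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw),     2 * (qy * qz + qx * qw),     1 - 2 * (qx * qx + qy * qy)],
    ])
    T = np.eye(4)
    T[:3, :3] = rot
    T[:3, 3] = [x, y, z]
    return T

# Identity orientation at position (0.1, 0.2, 0.3)
T = pose_to_matrix([0.1, 0.2, 0.3, 0.0, 0.0, 0.0, 1.0])
```

Applying `T` to homogeneous points (e.g. `T @ np.append(p, 1.0)`) maps them from the pose's local frame into its parent frame.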

**Visualization:**

We provide a [Rerun](https://rerun.io) visualization for our data. Visit [AprilRoboticsAI/Visualization](https://github.com/AprilRoboticsAI/Visualization) and follow the instructions there.
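If you render the hand yourself rather than using the Rerun setup, the 21-keypoint ordering documented in the quick example implies the following bone connectivity (a sketch; the index pairs are derived from that ordering and are not an official part of the dataset):

```python
import numpy as np

# (parent, child) keypoint indices, following the documented layout:
# 0=wrist, 1-4=thumb, 5-8=index, 9-12=middle, 13-16=ring, 17-20=pinky
HAND_BONES = [(0, 1), (1, 2), (2, 3), (3, 4),         # thumb
              (0, 5), (5, 6), (6, 7), (7, 8),         # index
              (0, 9), (9, 10), (10, 11), (11, 12),    # middle
              (0, 13), (13, 14), (14, 15), (15, 16),  # ring
              (0, 17), (17, 18), (18, 19), (19, 20)]  # pinky

def bone_lengths(keypoints):
    """Euclidean length of each bone for a (21, 3) keypoint array."""
    kp = np.asarray(keypoints, dtype=np.float32)
    return np.array([np.linalg.norm(kp[child] - kp[parent])
                     for parent, child in HAND_BONES])
```

The same `HAND_BONES` pairs can feed any line-segment drawing call, e.g. plotting `keypoints[parent]` to `keypoints[child]` for each pair.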

## About April Robotics

We view humans as another embodiment of a humanoid robot and believe their behavior should be captured with the same fidelity as robotic data. By building wearables like sensorized gloves and using the data ourselves to train and deploy humanoids, we create a closed loop in which real-world use continuously improves data quality. This tight integration ensures the data translates directly into better downstream robotic capabilities.

**BibTeX:**