Datasets · Modalities: Video · Size: < 1K · Libraries: Datasets

aspiridonov committed commit a01f22a (verified) · 1 parent: 483e7c7

Update README.md

Files changed (1): README.md (+41, −123)
README.md CHANGED
@@ -9,132 +9,50 @@ configs:
  data_files: data/*/*.parquet
  ---
 
- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
-
- ## Dataset Description
-
- - **Homepage:** [More Information Needed]
- - **Paper:** [More Information Needed]
- - **License:** apache-2.0
-
- ## Dataset Structure
-
- [meta/info.json](meta/info.json):
- ```json
- {
-     "codebase_version": "v3.0",
-     "robot_type": "glove",
-     "total_episodes": 6,
-     "total_frames": 7205,
-     "total_tasks": 2,
-     "chunks_size": 1000,
-     "data_files_size_in_mb": 100,
-     "video_files_size_in_mb": 200,
-     "fps": 30,
-     "splits": {
-         "train": "0:6"
-     },
-     "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
-     "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
-     "features": {
-         "observation.images.head_cam": {
-             "dtype": "video",
-             "shape": [3, 720, 960],
-             "names": ["c", "h", "w"],
-             "info": {
-                 "video.height": 720,
-                 "video.width": 960,
-                 "video.codec": "av1",
-                 "video.pix_fmt": "yuv420p",
-                 "video.is_depth_map": false,
-                 "video.fps": 30,
-                 "video.channels": 3,
-                 "has_audio": false
-             }
-         },
-         "observation.images.wrist_cam": {
-             "dtype": "video",
-             "shape": [3, 480, 640],
-             "names": ["c", "h", "w"],
-             "info": {
-                 "video.height": 480,
-                 "video.width": 640,
-                 "video.codec": "av1",
-                 "video.pix_fmt": "yuv420p",
-                 "video.is_depth_map": false,
-                 "video.fps": 30,
-                 "video.channels": 3,
-                 "has_audio": false
-             }
-         },
-         "observation.state": {
-             "dtype": "float32",
-             "shape": [77],
-             "names": ["state"]
-         },
-         "timestamp": {
-             "dtype": "float32",
-             "shape": [1],
-             "names": null
-         },
-         "frame_index": {
-             "dtype": "int64",
-             "shape": [1],
-             "names": null
-         },
-         "episode_index": {
-             "dtype": "int64",
-             "shape": [1],
-             "names": null
-         },
-         "index": {
-             "dtype": "int64",
-             "shape": [1],
-             "names": null
-         },
-         "task_index": {
-             "dtype": "int64",
-             "shape": [1],
-             "names": null
-         }
-     }
- }
  ```
 
- ## Citation
 
  **BibTeX:**
 
 
+ # April Robotics Data Sample
+ This dataset contains 3D ground-truth hand and finger annotations combined with egocentric and wrist-mounted recordings of industrial assembly operations, captured in an active manufacturing environment. The data showcases our multimodal capture system, consisting of head- and wrist-mounted cameras and a sensorized glove, which together track precise, high-quality human hand motion and manipulation.
+
+ ## Dataset Summary
+ - **Total episodes:** 6
+ - **Frames:** 7205
+ - **Tasks:** 2 distinct industrial manipulation tasks
+ - **Modalities:** RGB, 3D Hand Keypoints, Wrist Pose, Head Pose, Language Annotations
+ - **Resolution and rate:** 960 x 720 (headcam), 640 x 480 (wristcam), width x height, @ 30 FPS
+ - **Format:** LeRobot Dataset v3
+
+ ## How to Use
+
+ **Quick Example:**
+ ```python
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+ EPISODE_ID = 0
+ SAMPLE_ID = 10
+
+ dataset = LeRobotDataset(repo_id="aprilrobotics/sample", episodes=[EPISODE_ID])
+ frame = dataset[SAMPLE_ID]
+
+ # Camera images (H, W, 3)
+ head_img = frame["observation.images.head_cam"].numpy().transpose(1, 2, 0)
+ wrist_img = frame["observation.images.wrist_cam"].numpy().transpose(1, 2, 0)
+
+ # State layout: [head_pose(7), wrist_pose(7), hand_keypoints(63)]
+ # pose = [x, y, z, qx, qy, qz, qw]
+ state = frame["observation.state"].numpy()
+ head_pose = state[0:7]    # position + quaternion (xyzw)
+ wrist_pose = state[7:14]  # position + quaternion (xyzw)
+
+ # 21 hand keypoints:
+ # 0=wrist, 1-4=thumb, 5-8=index, 9-12=middle, 13-16=ring, 17-20=pinky
+ keypoints = state[14:].reshape(21, 3)
  ```
 
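The state layout described in the Quick Example can be unpacked into named parts. A minimal sketch in NumPy, assuming the 77-dim layout and keypoint numbering documented above; the helper names (`unpack_state`, `fingertip_positions`, `FINGERTIP_IDS`) are illustrative, not part of the dataset API:

```python
import numpy as np

# Fingertip indices follow the README's keypoint numbering
# (1-4=thumb, 5-8=index, ...): the last joint of each finger is the tip.
FINGERTIP_IDS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}

def unpack_state(state: np.ndarray):
    """Split a (77,) state vector into head pose, wrist pose, and hand keypoints."""
    assert state.shape == (77,), "expected [head_pose(7), wrist_pose(7), keypoints(63)]"
    head_pose = state[0:7]                 # [x, y, z, qx, qy, qz, qw]
    wrist_pose = state[7:14]               # [x, y, z, qx, qy, qz, qw]
    keypoints = state[14:].reshape(21, 3)  # 21 keypoints, (x, y, z) each
    return head_pose, wrist_pose, keypoints

def fingertip_positions(keypoints: np.ndarray) -> dict:
    """3D position of each fingertip, keyed by finger name."""
    return {name: keypoints[i] for name, i in FINGERTIP_IDS.items()}
```

Applied to `frame["observation.state"].numpy()` from the Quick Example, `unpack_state` yields the same three slices shown there, and `fingertip_positions` picks out the five tip keypoints.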
+ **Visualization:**
+ We provide a Rerun visualization for our data. Visit [https://github.com/AprilRoboticsAI/Visualization](https://github.com/AprilRoboticsAI/Visualization) and follow the instructions there.
+
+ # About April Robotics
+
+ We view humans as another embodiment of humanoid robots and believe their behavior should be captured with the same fidelity as robotic data. By building wearables such as sensorized gloves and using the data ourselves to train and deploy humanoids, we create a closed loop in which real-world use continuously improves data quality. This tight integration ensures the data translates directly into better downstream robotic capabilities.
+
  **BibTeX:**