Raffael-Kultyshev committed 8b3adab (verified) · 1 parent: c85ef24

Update README with motion annotations docs, usage examples, and known limitations

Files changed (1): README.md (+196 −38)

README.md CHANGED
@@ -10,16 +10,20 @@ tags:
  - manipulation
  - 6dof
  - mediapipe
  size_categories:
  - 10K<n<100K
  language:
  - en
- pretty_name: Dynamic Intelligence Humanoid Robots Training Dataset
  ---

- # Dynamic Intelligence - Humanoid Robots Training Dataset

- RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for humanoid robot training.

  ## 📊 Dataset Overview

@@ -28,47 +32,130 @@ RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for hum
  | Episodes | 97 |
  | Total Frames | ~28,000 |
  | FPS | 30 |
- | Tasks | 10 manipulation tasks (folding, picking, placing) |

  ---

- ## 📁 Structure

  ```
- data/
- ├── data/chunk-000/                 # Parquet files per episode
- │   ├── episode_000000.parquet
- │   ├── episode_000001.parquet
- │   └── ...
- ├── videos/chunk-000/rgb/           # MP4 videos per episode
- │   ├── episode_000000.mp4
- │   └── ...
- └── meta/
-     ├── info.json                   # Dataset metadata
-     ├── stats.json                  # Feature statistics
-     ├── events.json                 # Disturbance/recovery annotations
-     └── depth_quality_summary.json  # Depth QC metrics
  ```

  ---

- ## 🎯 Parquet Schema (per episode)

  | Column | Type | Description |
  |--------|------|-------------|
  | `episode_index` | int64 | Episode number (0-96) |
  | `frame_index` | int64 | Frame within episode |
  | `timestamp` | float64 | Time in seconds |
- | `language_instruction` | string | Task description (e.g., "Fold the white t-shirt") |
- | `observation.state` | float[90] | 15 hand joints × 6 DoF (x,y,z in cm + yaw,pitch,roll in deg) |
- | `action` | float[90] | Same as state (for imitation learning) |
  | `observation.images.rgb` | struct | Video path + timestamp |

  ---

- ## 📋 Events Metadata (`meta/events.json`)

- Annotated disturbances and recovery actions for select episodes.

  ### Disturbance Types
  | Type | Description |
@@ -89,29 +176,33 @@ Annotated disturbances and recovery actions for select episodes.

  ---

- ## 📈 Depth Quality Metrics (`meta/depth_quality_summary.json`)

- Quality control metrics for each episode's depth data.

  | Metric | Description | Dataset Average |
  |--------|-------------|-----------------|
- | `valid_depth_pct` | % of frames with valid depth at hand location | 95.5% ✅ |
- | `plane_rms_mm` | RMS deviation from flat surface (mm) | 5.73mm ✅ |

  ---

  ## 🔧 Capture Setup

- - **Device:** iPhone 13 Pro (TrueDepth front camera)
- - **RGB Resolution:** 640×480 @ 30fps
- - **Depth Resolution:** 640×480 (synchronized with RGB)
- - **Depth Format:** Meters (float32)
- - **Hand Tracking:** MediaPipe (21 landmarks → 15 selected joints)
- - **6-DoF Computation:** Back-projection + vector kinematics

  ---

 
 
115
 
116
  ```python
117
  from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
@@ -120,13 +211,80 @@ dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")
120
 
121
  # Access episode
122
  episode = dataset[0]
123
- state = episode["observation.state"] # [90] hand pose
124
- rgb = episode["observation.images.rgb"] # Video frame
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
125
  ```
126
 
127
  ---
128
 
129
  ## 📧 Contact
130
 
131
- **Organization:** [Dynamic Intelligence](https://huggingface.co/DynamicIntelligence)
132
 
 
 
  - manipulation
  - 6dof
  - mediapipe
+ - egocentric
+ - motion-semantics
  size_categories:
  - 10K<n<100K
  language:
  - en
+ pretty_name: Dynamic Intelligence - Egocentric Human Motion Annotation Dataset
  ---

+ # Dynamic Intelligence - Egocentric Human Motion Annotation Dataset

+ RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for humanoid robot training. Includes 6-DoF hand pose trajectories, synchronized video, and semantic motion annotations.
+
+ ---

  ## 📊 Dataset Overview

  | Episodes | 97 |
  | Total Frames | ~28,000 |
  | FPS | 30 |
+ | Tasks | 10 manipulation tasks |
+ | Total Duration | ~15.5 minutes |
+ | Avg Episode Length | ~9.6 seconds |
+
+ ### Task Distribution
+
+ | Task ID | Description | Episodes |
+ |---------|-------------|----------|
+ | Task 1 | Fold the white t-shirt on the bed | 8 |
+ | Task 2 | Fold the jeans on the bed | 10 |
+ | Task 3 | Fold two underwear and stack them | 10 |
+ | Task 4 | Put the pillow on the right place | 10 |
+ | Task 5 | Pick up plate and glass, put on stove | 10 |
+ | Task 6 | Go out the door and close it | 9 |
+ | Task 7 | Pick up sandals, put next to scale | 10 |
+ | Task 8 | Put cloth in basket, close drawer | 10 |
+ | Task 9 | Screw the cap on your bottle | 10 |
+ | Task 10 | Pick up two objects, put on bed | 10 |

  ---

+ ## 📁 Repository Structure

  ```
+ humanoid-robots-training-dataset/
+ │
+ ├── data/
+ │   └── chunk-000/                        # Parquet files (97 episodes)
+ │       ├── episode_000000.parquet
+ │       ├── episode_000001.parquet
+ │       └── ...
+ │
+ ├── videos/
+ │   └── chunk-000/rgb/                    # MP4 videos (synchronized)
+ │       ├── episode_000000.mp4
+ │       └── ...
+ │
+ ├── meta/                                 # Metadata & Annotations
+ │   ├── info.json                         # Dataset configuration (LeRobot format)
+ │   ├── stats.json                        # Feature min/max/mean/std statistics
+ │   ├── events.json                       # Disturbance & recovery annotations
+ │   ├── depth_quality_summary.json        # Per-episode depth QC metrics
+ │   └── annotations_motion_v1_frames.json # Motion semantics annotations
+ │
+ └── README.md
  ```

  ---

+ ## 🎯 Data Schema
+
+ ### Parquet Columns (per frame)

  | Column | Type | Description |
  |--------|------|-------------|
  | `episode_index` | int64 | Episode number (0-96) |
  | `frame_index` | int64 | Frame within episode |
  | `timestamp` | float64 | Time in seconds |
+ | `language_instruction` | string | Task description |
+ | `observation.state` | float[252] | 21 hand joints × 2 hands × 6 DoF |
+ | `action` | float[252] | Same as state (for imitation learning) |
  | `observation.images.rgb` | struct | Video path + timestamp |

+ ### 6-DoF Hand Pose Format
+
+ Each joint has 6 values: `[x_cm, y_cm, z_cm, yaw_deg, pitch_deg, roll_deg]`
+
+ **Coordinate System:**
+ - Origin: Camera (iPhone TrueDepth)
+ - X: Right (positive)
+ - Y: Down (positive)
+ - Z: Forward (positive, into scene)
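The flat state vector can be unpacked into per-hand, per-joint poses. A minimal sketch, assuming a hand-major layout (2 hands × 21 joints × 6 values, in the order listed above); the exact ordering should be confirmed against `meta/info.json`:

```python
# Unpack a flat float[252] observation.state into pose[hand][joint] -> 6 values.
# Assumed layout (not stated explicitly in the schema): hand-major, then
# joint-major, each joint as [x_cm, y_cm, z_cm, yaw_deg, pitch_deg, roll_deg].
def unpack_state(state):
    assert len(state) == 252
    return [
        [state[(h * 21 + j) * 6 : (h * 21 + j) * 6 + 6] for j in range(21)]
        for h in range(2)
    ]

state = [0.0] * 252            # placeholder frame
pose = unpack_state(state)
wrist = pose[0][0]             # first hand's wrist (MediaPipe landmark 0)
```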

  ---

+ ## 🏷️ Motion Semantics Annotations
+
+ **File:** `meta/annotations_motion_v1_frames.json`
+
+ Coarse temporal segmentation with motion intent, phase, and error labels.
+
+ ### Annotation Schema
+
+ Annotated example (the `//` comments are explanatory and not present in the JSON file itself):
+
+ ```json
+ {
+   "episode_id": "Task1_Vid2",
+   "segments": [
+     {
+       "start_frame": 54,
+       "end_frame_exclusive": 140,
+       "motion_type": "grasp",        // What action is being performed
+       "temporal_phase": "start",     // start | contact | manipulate | end
+       "actor": "both_hands",         // left_hand | right_hand | both_hands
+       "target": {
+         "type": "cloth_region",      // cloth_region | object | surface
+         "value": "bottom_edge"       // Specific target identifier
+       },
+       "state": {
+         "stage": "unfolded",         // Task-specific state
+         "flatness": "wrinkled",      // For folding tasks only
+         "symmetry": "asymmetric"     // For folding tasks only
+       },
+       "error": "none"                // misalignment | slip | drop | none
+     }
+   ]
+ }
+ ```
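Segment bounds are frame indices with an exclusive end, so durations follow directly from the dataset's 30 fps:

```python
# Duration of the example segment above at the dataset's 30 fps.
FPS = 30
seg = {"start_frame": 54, "end_frame_exclusive": 140}
n_frames = seg["end_frame_exclusive"] - seg["start_frame"]  # 86 frames
duration_s = n_frames / FPS                                 # ~2.87 s
```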
+
+ ### Motion Types
+
+ `grasp` | `pull` | `align` | `fold` | `smooth` | `insert` | `rotate` | `open` | `close` | `press` | `hold` | `release` | `place`
+
+ ### Why Motion Annotations?
+
+ - **Temporal Structure**: Know when manipulation phases begin/end
+ - **Intent Understanding**: What the human intends to do, not just kinematics
+ - **Error Detection**: Labeled failure modes (slip, drop, misalignment)
+ - **Training Signal**: Richer supervision for imitation learning
+
+ ---
+
+ ## 📋 Events Metadata
+
+ **File:** `meta/events.json`
+
+ Disturbances and recovery actions for select episodes.

  ### Disturbance Types
  | Type | Description |

  ---

+ ## 📈 Depth Quality Metrics
+
+ **File:** `meta/depth_quality_summary.json`
+
  | Metric | Description | Dataset Average |
  |--------|-------------|-----------------|
+ | `valid_depth_pct` | % of frames with valid depth at the hand | 95.5% ✅ |
+ | `plane_rms_mm` | RMS deviation from a flat surface | 5.73 mm ✅ |
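These per-episode metrics lend themselves to filtering episodes before training. A sketch using hypothetical records that mirror the two metrics above; the real file's key layout may differ and should be checked first:

```python
# Hypothetical per-episode records mirroring the two QC metrics above;
# the actual structure of depth_quality_summary.json may differ.
summary = {
    "episode_000000": {"valid_depth_pct": 97.2, "plane_rms_mm": 4.9},
    "episode_000001": {"valid_depth_pct": 88.1, "plane_rms_mm": 7.3},
}

# Keep only episodes whose depth coverage is at least 90%.
good_episodes = sorted(
    ep for ep, m in summary.items() if m["valid_depth_pct"] >= 90.0
)
```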

  ---

  ## 🔧 Capture Setup

+ | Parameter | Value |
+ |-----------|-------|
+ | **Device** | iPhone 13 Pro (TrueDepth front camera) |
+ | **RGB Resolution** | 640×480 @ 30 fps |
+ | **Depth Resolution** | 640×480 (synchronized) |
+ | **Depth Format** | Meters (float32) |
+ | **Hand Tracking** | MediaPipe (21 landmarks per hand) |
+ | **6-DoF Computation** | Back-projection + vector kinematics |
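The back-projection step can be sketched with a standard pinhole camera model; the intrinsics below are illustrative placeholders, not the dataset's actual calibration:

```python
def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) + metric depth -> camera-frame XYZ (m)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Illustrative intrinsics for a 640x480 image (NOT the dataset's calibration)
fx = fy = 500.0
cx, cy = 320.0, 240.0

# A pixel at the principal point back-projects onto the optical axis.
x, y, z = backproject(320.0, 240.0, 0.5, fx, fy, cx, cy)  # -> (0.0, 0.0, 0.5)
```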

  ---
 
203
+ ## 🚀 Usage
204
+
205
+ ### With LeRobot
206
 
207
  ```python
208
  from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
 
211
 
212
  # Access episode
213
  episode = dataset[0]
214
+ state = episode["observation.state"] # [252] hand pose (both hands)
215
+ rgb = episode["observation.images.rgb"] # Video frame
216
+ task = episode["language_instruction"] # Task description
217
+ ```
+
+ ### Loading Motion Annotations
+
+ ```python
+ import json
+ from huggingface_hub import hf_hub_download
+
+ # Download the annotations file from the dataset repo
+ path = hf_hub_download(
+     repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
+     filename="meta/annotations_motion_v1_frames.json",
+     repo_type="dataset",
+ )
+
+ with open(path) as f:
+     annotations = json.load(f)
+
+ # List annotated episodes for Task1
+ task1_episodes = annotations["tasks"]["Task1"]["episodes"]
+ for ep in task1_episodes:
+     print(f"{ep['episode_id']}: {len(ep['segments'])} segments")
+ ```
+
+ ### Combining Pose + Annotations
+
+ ```python
+ # Look up the frame-level motion label for a given frame index
+ def get_motion_label(frame_idx, segments):
+     for seg in segments:
+         if seg["start_frame"] <= frame_idx < seg["end_frame_exclusive"]:
+             return seg["motion_type"], seg["temporal_phase"]
+     return None, None  # frame falls outside all annotated segments
+
+ # Example: label every frame of one episode
+ # (episode_annotations is one episode entry from the annotations file;
+ #  num_frames is that episode's frame count)
+ for frame_idx in range(num_frames):
+     motion, phase = get_motion_label(frame_idx, episode_annotations["segments"])
+     if motion:
+         print(f"Frame {frame_idx}: {motion} ({phase})")
+ ```
+
+ ---
+
+ ## ⚠️ Known Limitations
+
+ 1. **Depth Dropouts**: Some frames have invalid depth in low-light or high-motion scenarios (~4.5% of frames)
+ 2. **Hand Tracking Accuracy**: MediaPipe may lose tracking during fast movements or self-occlusion
+ 3. **Single Viewpoint**: Egocentric only; no multi-view coverage
+ 4. **Limited Object Diversity**: The same objects are used across episodes within each task
+ 5. **Annotation Coverage**: Motion annotations cover the primary manipulation phases; transitional frames may be unlabeled
+
+ ---
+
+ ## 📖 Citation
+
+ If you use this dataset in your research, please cite:
+
+ ```bibtex
+ @dataset{dynamic_intelligence_2024,
+   author    = {Dynamic Intelligence},
+   title     = {Egocentric Human Motion Annotation Dataset},
+   year      = {2024},
+   publisher = {Hugging Face},
+   url       = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
+ }
  ```

  ---

  ## 📧 Contact

+ **Organization:** [Dynamic Intelligence](https://dynamicintelligence.company)
+
+ For questions or collaboration inquiries, please open a Discussion on this repository.