Raffael-Kultyshev committed on
Commit
501820a
·
verified ·
1 Parent(s): 781c6ba

Rewrite README: accurate description of dataset content, pipeline, and data format

Files changed (1)
  1. README.md +103 -67
README.md CHANGED
@@ -5,39 +5,59 @@ task_categories:
5
  tags:
6
  - lerobot
7
  - hand-pose
8
- - rgb-d
9
  - humanoid
10
  - manipulation
11
  - 6dof
12
  - mediapipe
13
  - egocentric
14
- - motion-semantics
15
  size_categories:
16
  - 10K<n<100K
17
  language:
18
  - en
19
- pretty_name: Dynamic Intelligence - Egocentric Human Motion Annotation Dataset
20
  ---
21
 
22
- # Dynamic Intelligence - Egocentric Human Motion Annotation Dataset
23
 
24
- RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for humanoid robot training. Includes 6-DoF hand pose trajectories, synchronized video, and semantic motion annotations.
25
 
26
  ---
27
 
28
- ## Dataset Overview
29
 
30
- | Metric | Value |
31
- |--------|-------|
32
- | Episodes | 145 |
33
- | Total Frames | ~59,000 |
34
- | FPS | 30 |
35
- | Tasks | 12 manipulation tasks |
36
 
37
- ### Task Distribution
38
 
39
- | Task | Description | Episodes | Count |
40
- |------|-------------|----------|-------|
41
  | 1 | Fold the t-shirt on the bed. | 0–7 | 8 |
42
  | 2 | Pick up the two items on the floor and put them on the bed. | 8–17 | 10 |
43
  | 3 | Fold the jeans on the bed. | 18–27 | 10 |
@@ -53,54 +73,65 @@ RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for hum
53
 
54
  ---
55
 
56
- ## Repository Structure
57
 
58
- ```
59
- humanoid-robots-training-dataset/
60
-
61
- ├── data/
62
- │ ├── chunk-000/ # Parquet files (episodes 0-99)
63
- │ └── chunk-001/ # Parquet files (episodes 100-144)
64
-
65
- ├── videos/
66
- │ ├── chunk-000/rgb/ # MP4 videos (episodes 0-99)
67
- │ └── chunk-001/rgb/ # MP4 videos (episodes 100-144)
68
-
69
- ├── meta/
70
- │ ├── info.json # Dataset configuration (LeRobot format)
71
- │ ├── stats.json # Feature statistics
72
- │ ├── events.json # Disturbance & recovery annotations
73
- │ └── annotations_motion_v1_frames.json
74
-
75
- └── README.md
76
- ```
77
 
78
- ---
79
 
80
- ## Data Schema
81
 
82
- ### Parquet Columns (per frame)
83
 
84
  | Column | Type | Description |
85
  |--------|------|-------------|
86
- | `episode_index` | int64 | Episode number (0–144) |
87
- | `frame_index` | int64 | Frame within episode |
88
- | `timestamp` | float64 | Time in seconds |
89
- | `language_instruction` | string | Task description |
90
- | `observation.camera_pose` | float[6] | Camera 6-DoF (x, y, z, roll, pitch, yaw) |
91
- | `observation.left_hand` | float[9] | Left hand keypoints |
92
- | `observation.right_hand` | float[9] | Right hand keypoints |
93
- | `action.camera_delta` | float[6] | Camera delta 6-DoF |
94
- | `action.left_hand_delta` | float[9] | Left hand delta keypoints |
95
- | `action.right_hand_delta` | float[9] | Right hand delta keypoints |
96
-
97
- ### Coordinate System
98
- - Origin: Camera (iPhone TrueDepth)
99
- - X: Right, Y: Down, Z: Forward (into scene)
100
 
101
  ---
102
 
103
- ## Usage
104
 
105
  ### With LeRobot
106
 
@@ -109,12 +140,13 @@ from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
109
 
110
  dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")
111
 
112
- episode = dataset[0]
113
- state = episode["observation.camera_pose"] # [6] camera 6-DoF
114
- task = episode["language_instruction"] # "Fold the t-shirt on the bed."
 
115
  ```
116
 
117
- ### Direct Parquet Access
118
 
119
  ```python
120
  import pandas as pd
@@ -123,20 +155,30 @@ from huggingface_hub import hf_hub_download
123
  path = hf_hub_download(
124
  repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
125
  filename="data/chunk-000/episode_000000.parquet",
126
- repo_type="dataset"
127
  )
128
  df = pd.read_parquet(path)
129
- print(df["language_instruction"].iloc[0])
 
130
  ```
131
 
132
  ---
133
 
134
  ## Citation
135
 
136
  ```bibtex
137
  @dataset{dynamic_intelligence_2025,
138
  author = {Dynamic Intelligence},
139
- title = {Egocentric Human Motion Annotation Dataset},
140
  year = {2025},
141
  publisher = {Hugging Face},
142
  url = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
@@ -147,11 +189,5 @@ print(df["language_instruction"].iloc[0])
147
 
148
  ## Contact
149
 
150
- **Email:** shayan@dynamicintelligence.company
151
  **Organization:** [Dynamic Intelligence](https://dynamicintelligence.company)
152
-
153
- ---
154
-
155
- ## Visualizer
156
-
157
- Explore the dataset interactively: [DI Hand Pose Sample Dataset Viewer](https://huggingface.co/spaces/DynamicIntelligence/dynamic_intelligence_sample_data)
 
5
  tags:
6
  - lerobot
7
  - hand-pose
 
8
  - humanoid
9
  - manipulation
10
  - 6dof
11
  - mediapipe
12
  - egocentric
13
+ - imitation-learning
14
  size_categories:
15
  - 10K<n<100K
16
  language:
17
  - en
18
+ pretty_name: Dynamic Intelligence - Humanoid Robot Training Dataset
19
  ---
20
 
21
+ # Dynamic Intelligence Humanoid Robot Training Dataset
22
 
23
+ A first-person (egocentric) video dataset of human hand manipulation, designed for training humanoid robot policies via imitation learning. Each episode captures a person performing an everyday household task (folding clothes, moving dishes, opening doors), filmed from a head-mounted iPhone using its built-in LiDAR and depth sensors.
24
+
25
+ The dataset pairs each video with frame-level 3D hand tracking and camera pose data, giving learning algorithms both the visual input and the corresponding spatial trajectories they need to reproduce the demonstrated behavior on a robot.
26
 
27
  ---
28
 
29
+ ## How it works
30
+
31
+ **Recording setup.** A person wears an iPhone 13 Pro on their head (using a head mount). The phone runs the [Record3D](https://record3d.app/) app, which simultaneously captures:
32
+ - RGB video at 30 FPS
33
+ - Depth maps via the LiDAR sensor
34
+ - 6-DoF camera pose from ARKit (position + orientation of the phone in the room)
35
+
36
+ **Processing pipeline.** After recording, each episode goes through an offline pipeline:
37
+ 1. **Hand detection** — [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) detects 2D hand landmarks in every RGB frame
38
+ 2. **3D reconstruction** — The 2D landmarks are projected into 3D space using the corresponding depth map, producing real-world XYZ positions (in cm) relative to the camera
39
+ 3. **Action computation** — Frame-to-frame deltas are computed for both the camera and hand positions, representing the "actions" a robot would need to take
40
+
41
+ **Result.** Each episode contains a synchronized video and a parquet file with per-frame 3D observations and actions, formatted for the [LeRobot](https://github.com/huggingface/lerobot) framework.
42
+
43
+ ---
44
 
45
+ ## Dataset overview
46
 
47
+ | | |
48
+ |---|---|
49
+ | **Episodes** | 145 |
50
+ | **Total data frames** | ~59,000 |
51
+ | **Video FPS** | 30 |
52
+ | **Tasks** | 12 household manipulation tasks |
53
+ | **Format** | [LeRobot v2.0](https://github.com/huggingface/lerobot) |
54
+ | **Sensor** | iPhone 13 Pro (RGB + LiDAR + ARKit) |
55
+ | **Perspective** | Egocentric (head-mounted) |
56
 
57
+ ### Tasks
58
+
59
+ | # | Task instruction | Episodes | Count |
60
+ |---|------------------|----------|-------|
61
  | 1 | Fold the t-shirt on the bed. | 0–7 | 8 |
62
  | 2 | Pick up the two items on the floor and put them on the bed. | 8–17 | 10 |
63
  | 3 | Fold the jeans on the bed. | 18–27 | 10 |
 
73
 
74
  ---
75
 
76
+ ## What's in the data
77
 
78
+ Each episode has two files: a **video** (`.mp4`) and a **parquet** table with one row per tracked frame.
79
 
80
+ ### Observations (what the robot sees)
81
+
82
+ | Column | Shape | Unit | Description |
83
+ |--------|-------|------|-------------|
84
+ | `observation.camera_pose` | float[6] | cm, degrees | Position (x, y, z) and orientation (roll, pitch, yaw) of the head-mounted camera in the room. Comes from ARKit's visual-inertial odometry. |
85
+ | `observation.left_hand` | float[9] | cm | 3D positions of 3 keypoints on the left hand: wrist, thumb tip, and index fingertip (x, y, z each). |
86
+ | `observation.right_hand` | float[9] | cm | 3D positions of 3 keypoints on the right hand: wrist, index fingertip, and middle fingertip (x, y, z each). |
87
+
88
+ ### Actions (what the robot should do)
89
 
90
+ | Column | Shape | Description |
91
+ |--------|-------|-------------|
92
+ | `action.camera_delta` | float[6] | Frame-to-frame change in camera pose (dx, dy, dz, droll, dpitch, dyaw). Represents head movement. |
93
+ | `action.left_hand_delta` | float[9] | Frame-to-frame change in left hand keypoint positions. |
94
+ | `action.right_hand_delta` | float[9] | Frame-to-frame change in right hand keypoint positions. |
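
If the deltas follow the usual next-minus-current convention (an assumption; the README does not pin down the sign or the padding of the final frame), they can be recovered from the observation columns with a one-line diff:

```python
import numpy as np

# Toy camera-pose trajectory: 4 frames x 6 DoF (x, y, z, roll, pitch, yaw).
poses = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0, 5.0],
    [2.0, 0.5, 0.0, 0.0, 0.0, 10.0],
    [2.0, 1.0, 0.5, 0.0, 0.0, 10.0],
])

# Assumed convention: delta[t] = pose[t+1] - pose[t], zero-padded at the
# last frame so the action array has one row per frame.
deltas = np.diff(poses, axis=0)
deltas = np.vstack([deltas, np.zeros(6)])

print(deltas[0])  # [1. 0. 0. 0. 0. 5.]
```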
95
 
96
+ ### Metadata columns
97
 
98
  | Column | Type | Description |
99
  |--------|------|-------------|
100
+ | `episode_index` | int | Which episode (0–144) |
101
+ | `frame_index` | int | Frame number within the episode |
102
+ | `timestamp` | float | Time in seconds from episode start |
103
+ | `language_instruction` | string | Natural language task description (same for all frames in an episode) |
104
+ | `next.done` | bool | Whether this is the last frame of the episode |
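
Because `episode_index` is constant within an episode and `next.done` flags its final frame, episode boundaries can be recovered from the metadata columns alone. A toy sketch with pandas (the five-row frame below is made up for illustration):

```python
import pandas as pd

# Toy slice of the per-frame table: two short episodes.
df = pd.DataFrame({
    "episode_index": [0, 0, 0, 1, 1],
    "frame_index":   [0, 1, 2, 0, 1],
    "timestamp":     [0.0, 1 / 30, 2 / 30, 0.0, 1 / 30],
    "next.done":     [False, False, True, False, True],
})

# Per-episode frame counts.
lengths = df.groupby("episode_index").size()
print(lengths.tolist())  # [3, 2]

# `next.done` marks exactly one frame (the last) per episode.
assert df.groupby("episode_index")["next.done"].sum().eq(1).all()
```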
105
+
106
+ ### Coordinate system
107
+
108
+ All 3D positions are relative to the camera:
109
+ - **X** → right
110
+ - **Y** → down
111
+ - **Z** → forward (into the scene)
112
+
113
+ Hand values of `[0, 0, 0]` mean the hand was not detected in that frame (e.g. out of view or occluded).
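
When computing statistics or training losses, those all-zero rows should be masked out so undetected hands are not treated as real positions at the origin. A minimal sketch, assuming the hand columns have been loaded into a NumPy array:

```python
import numpy as np

# Toy left-hand keypoints: 3 frames x 9 values (3 keypoints x XYZ, in cm).
left_hand = np.array([
    [12.0, -3.0, 40.0, 14.0, -2.0, 41.0, 13.0, -4.0, 39.0],
    [0.0] * 9,   # hand not detected in this frame
    [11.5, -3.2, 40.5, 13.8, -2.1, 41.2, 12.9, -4.1, 39.3],
])

# A frame counts as "detected" if any coordinate is non-zero.
detected = np.any(left_hand != 0.0, axis=1)
valid = left_hand[detected]

print(detected)     # [ True False  True]
print(valid.shape)  # (2, 9)
```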
114
+
115
+ ---
116
+
117
+ ## File structure
118
+
119
+ ```
120
+ ├── data/
121
+ │ ├── chunk-000/ # Parquet files for episodes 0–99
122
+ │ └── chunk-001/ # Parquet files for episodes 100–144
123
+ ├── videos/
124
+ │ ├── chunk-000/rgb/ # MP4 videos for episodes 0–99
125
+ │ └── chunk-001/rgb/ # MP4 videos for episodes 100–144
126
+ ├── meta/
127
+ │ ├── info.json # LeRobot dataset config
128
+ │ └── stats.json # Column statistics (min/max/mean/std)
129
+ └── README.md
130
+ ```
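
Given this chunking (100 episodes per chunk), the repo-relative path for any episode's parquet file can be derived from its index. The zero-padded naming is inferred from the download example in this README, so treat it as an assumption:

```python
def episode_parquet_path(episode: int, episodes_per_chunk: int = 100) -> str:
    """Repo-relative parquet path for an episode.

    Assumes chunk-000 holds episodes 0-99 and chunk-001 holds 100-144,
    with zero-padded chunk and episode numbers.
    """
    chunk = episode // episodes_per_chunk
    return f"data/chunk-{chunk:03d}/episode_{episode:06d}.parquet"

print(episode_parquet_path(0))    # data/chunk-000/episode_000000.parquet
print(episode_parquet_path(144))  # data/chunk-001/episode_000144.parquet
```

The returned string can be passed directly as the `filename` argument of `hf_hub_download` in the quick-start example.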
131
 
132
  ---
133
 
134
+ ## Quick start
135
 
136
  ### With LeRobot
137
 
 
140
 
141
  dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")
142
 
143
+ sample = dataset[0]
144
+ print(sample["language_instruction"]) # "Fold the t-shirt on the bed."
145
+ print(sample["observation.camera_pose"]) # tensor of shape [6]
146
+ print(sample["action.left_hand_delta"]) # tensor of shape [9]
147
  ```
148
 
149
+ ### Direct download
150
 
151
  ```python
152
  import pandas as pd
 
155
  path = hf_hub_download(
156
  repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
157
  filename="data/chunk-000/episode_000000.parquet",
158
+ repo_type="dataset",
159
  )
160
  df = pd.read_parquet(path)
161
+ print(f"{len(df)} frames")
162
+ print(df[["timestamp", "observation.camera_pose", "language_instruction"]].head())
163
  ```
164
 
165
  ---
166
 
167
+ ## Visualizer
168
+
169
+ Browse episodes interactively:
170
+ **[DI Hand Pose Sample Dataset Viewer](https://huggingface.co/spaces/DynamicIntelligence/dynamic_intelligence_sample_data)**
171
+
172
+ The viewer shows the egocentric video alongside time-series plots of camera pose and hand positions, so you can see exactly what the person was doing and how the tracking data aligns with the video.
173
+
174
+ ---
175
+
176
  ## Citation
177
 
178
  ```bibtex
179
  @dataset{dynamic_intelligence_2025,
180
  author = {Dynamic Intelligence},
181
+ title = {Humanoid Robot Training Dataset: Egocentric Hand Manipulation Demonstrations},
182
  year = {2025},
183
  publisher = {Hugging Face},
184
  url = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
 
189
 
190
  ## Contact
191
 
 
192
  **Organization:** [Dynamic Intelligence](https://dynamicintelligence.company)
193
+ **Email:** shayan@dynamicintelligence.company