Jade-OOJU committed
Commit 6c562ce · verified · 1 parent: 9cad01f

Re-upload: FLU coordinates, headset (center-eye) pose and position, joint positions in FLU

Files changed (4):
  1. README.md +145 -145
  2. data/chunk_0000.parquet +2 -2
  3. data/chunk_0001.parquet +2 -2
  4. meta/info.json +34 -10
README.md CHANGED
@@ -1,145 +1,145 @@
---
license: apache-2.0
task_categories:
- robotics
tags:
- lerobot
- robotics
- mixed-reality
- bimanual
- kitchen
- manipulation
- imitation-learning
- demonstration
configs:
- config_name: default
  data_files:
  - split: train
    path: data/chunk_*.parquet
pretty_name: 'WILD-Mani: Kitchen Edition'
---
# WILD-Mani: A Real-World Bimanual Manipulation Dataset – Kitchen Edition

The first installment of the WILD-Mani series.

* High-fidelity bimanual demonstrations collected in real-world kitchen environments using Mixed Reality (MR).
* Designed to support Sim2Real transfer for bimanual manipulation tasks.
* Current Domain: Kitchen

## Dataset Description

This dataset contains human demonstrations of kitchen manipulation tasks captured using a Mixed Reality headset in **real-world kitchen environments**. The data tracks both hands simultaneously with 6-DOF pose tracking, enabling bimanual robot imitation learning research.

### Key Features

- **Real-world environments**: Demonstrations captured in actual kitchens (not simulation)
- **Mixed Reality capture**: High-fidelity hand tracking in physical spaces
- **Bimanual manipulation**: Both hands tracked simultaneously
- **Camera view**: Single camera (center eye) per demonstration
- **Diverse tasks**: Multiple kitchen manipulation tasks with varying complexity
## Dataset Statistics

| Property | Value |
|----------|-------|
| **Episodes** | 102 |
| **Total Frames** | 79,244 |
| **FPS** | 30 |
| **Cameras** | 1 |
| **Environment** | Real kitchen with lighting variations |

## Task Categories

| Category | Task | Objects | Actions | Demos |
|----------|------|---------|---------|-------|
| Meal Prep | Get item from fridge | Bottle, container | Open, pick, close, place | 24 |
| Dish Organizing | Place cups on shelf | Cups (2-3) | Pick, place | 21 |
| Dish Organizing | Put utensils in drawer | Fork, spoon, knife | Pick, open, place, close | 20 |
| Meal Prep | Set table (plate + utensil) | Plate, fork, knife | Pick, place | 21 |

## Diversity Dimensions

| Dimension | Coverage | Details |
|-----------|----------|---------|
| **Actions** | 4+ types | Pick, place, open, close |
| **Objects** | 15+ objects | Kitchen items varying in size, shape, material |
| **Environment** | 1 kitchen | Natural light, artificial light, dim lighting |
| **Surfaces** | 3 surfaces | Counter, dining table, shelf |
| **Task Complexity** | Single + Multi-step | Pick-place (single) → Fridge sequence (multi-step) |
| **Clutter** | Natural | Objects on counter, objects in fridge |
## Data Format

This dataset follows the **LeRobot v3.0** format.

### Action Space (14D)

The action space represents delta poses for both hands:

| Dimensions | Description |
|------------|-------------|
| 0-2 | Left hand position delta (x, y, z) |
| 3-6 | Left hand rotation delta (quaternion x, y, z, w) |
| 7-9 | Right hand position delta (x, y, z) |
| 10-13 | Right hand rotation delta (quaternion x, y, z, w) |
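The card does not state the frame in which the deltas are expressed. As a minimal sketch, assuming position deltas are plain world-frame differences and rotation deltas are the relative quaternion `q_next ∘ q_prev⁻¹` (both in x, y, z, w order), consecutive 7D wrist poses could be turned into a 7D delta like this; `pose_delta` is a hypothetical helper, not part of the dataset tooling:

```python
import numpy as np

def quat_conj(q):
    # Conjugate (= inverse for a unit quaternion), (x, y, z, w) order.
    x, y, z, w = q
    return np.array([-x, -y, -z, w])

def quat_mul(a, b):
    # Hamilton product a ∘ b, (x, y, z, w) order.
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return np.array([
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
        aw * bw - ax * bx - ay * by - az * bz,
    ])

def pose_delta(pose_prev, pose_next):
    """7D pose (position xyz + quaternion xyzw) -> 7D delta pose."""
    dp = pose_next[:3] - pose_prev[:3]                     # position difference
    dq = quat_mul(pose_next[3:], quat_conj(pose_prev[3:])) # relative rotation
    return np.concatenate([dp, dq])
```

Stacking the left- and right-hand deltas in that order would reproduce the 14D layout in the table above.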
### Observation Space

| Key | Shape | Description |
|-----|-------|-------------|
| `state` | (14,) | Combined left/right wrist poses |
| `left_state` | (7,) | Left wrist pose (position + quaternion) |
| `right_state` | (7,) | Right wrist pose (position + quaternion) |
| `right_joint_poses` | (78,) | Right hand full skeleton (26 joints × 3D position) |
| `left_joint_poses` | (78,) | Left hand full skeleton (26 joints × 3D position) |
| `observation.images.center_eye` | (H, W, 3) | Center eye camera view |
| `observation.action_label` | string | Per-frame action label (e.g., open, pick, place, close) |

### Joint Order (26 joints per hand)

palm, wrist, thumb_metacarpal, thumb_proximal, thumb_distal, thumb_tip, index_metacarpal, index_proximal, index_intermediate, index_distal, index_tip, middle_metacarpal, middle_proximal, middle_intermediate, middle_distal, middle_tip, ring_metacarpal, ring_proximal, ring_intermediate, ring_distal, ring_tip, little_metacarpal, little_proximal, little_intermediate, little_distal, little_tip
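Using the joint order above, a flat `(78,)` joint vector can be reshaped to `(26, 3)` and indexed by joint name. A minimal sketch; `joints_as_xyz` and `fingertip_positions` are illustrative helpers, not part of the dataset tooling:

```python
import numpy as np

# Joint order exactly as listed above: 26 joints per hand.
JOINTS = (
    "palm wrist "
    "thumb_metacarpal thumb_proximal thumb_distal thumb_tip "
    "index_metacarpal index_proximal index_intermediate index_distal index_tip "
    "middle_metacarpal middle_proximal middle_intermediate middle_distal middle_tip "
    "ring_metacarpal ring_proximal ring_intermediate ring_distal ring_tip "
    "little_metacarpal little_proximal little_intermediate little_distal little_tip"
).split()
JOINT_INDEX = {name: i for i, name in enumerate(JOINTS)}

def joints_as_xyz(flat):
    # Flat (78,) vector -> (26, 3) array of 3D joint positions.
    return np.asarray(flat).reshape(26, 3)

def fingertip_positions(flat):
    # (5, 3) array of tip positions: thumb, index, middle, ring, little.
    xyz = joints_as_xyz(flat)
    tips = [JOINT_INDEX[n] for n in
            ("thumb_tip", "index_tip", "middle_tip", "ring_tip", "little_tip")]
    return xyz[tips]
```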
## Recording Setup

- **Headset**: Meta Quest 3 (Mixed Reality)
- **Environment**: Real-world kitchen
- **Tracking**: Full skeletal hand tracking (26 joints per hand) via Quest 3 hand tracking
- **Cameras**: 1 camera (center eye view)
- **Software**: Unity-based recording system

## File Structure

```
.
├── data/
│   ├── chunk_0000.parquet
│   └── chunk_0001.parquet
├── meta/
│   ├── info.json
│   ├── stats.json
│   └── episode_index.parquet
├── videos/
│   └── observation.images.center_eye/
└── README.md
```
## License

This dataset is released under the Apache 2.0 License.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{kitchen_mr_manipulation,
  title={Kitchen Mixed Reality Manipulation Dataset},
  author={OOJU},
  year={2026},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/ooju/kitchen-vr-manipulation}}
}
```

## Acknowledgments

Dataset collected using Mixed Reality hand tracking in real-world kitchen environments. Recording system built with Unity and Meta Quest.
 
data/chunk_0000.parquet CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f292ca99a5c6ac65aa8c158579121bfadec14d0b709339b7270c112eccd21128
-size 88828858
+oid sha256:60535fad27db1d3d7bbb9f5b38d9ac38fc3a3edf7272362bed33ea6aeb630b22
+size 93972469
```
data/chunk_0001.parquet CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:50fc938c8c8113952f8bc485aa98b5057c9bd70c91c6e024124633f5b1ed1510
-size 2906244
+oid sha256:0a0a42f39e8cb2e6a50a02145f2cf89be804fcd3c3a86666f8758430feb4e014
+size 3053501
```
meta/info.json CHANGED

```diff
@@ -131,21 +131,45 @@
         0.9312140941619873
       ]
     },
-    "observation.images.center_eye": {
-      "type": "Video",
+    "observation.camera_pose": {
+      "type": "Box",
       "shape": [
-        null,
-        null,
-        3
+        7
       ],
-      "dtype": "uint8"
+      "low": [
+        -Infinity,
+        -Infinity,
+        -Infinity,
+        -Infinity,
+        -Infinity,
+        -Infinity,
+        -Infinity
+      ],
+      "high": [
+        Infinity,
+        Infinity,
+        Infinity,
+        Infinity,
+        Infinity,
+        Infinity,
+        Infinity
+      ]
     },
-    "observation.action_label": {
-      "dtype": "string",
+    "observation.headset_position": {
+      "type": "Box",
       "shape": [
-        1
+        3
+      ],
+      "low": [
+        -Infinity,
+        -Infinity,
+        -Infinity
       ],
-      "names": null
+      "high": [
+        Infinity,
+        Infinity,
+        Infinity
+      ]
     }
   }
 }
```
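The new entries add two unbounded Box features: a 7D `observation.camera_pose` and a 3D `observation.headset_position`. A small sketch for working with them; the 7D layout (3D position followed by a quaternion in x, y, z, w order) is an assumption carried over from the wrist-pose convention in the README, and note that Python's `json` parser accepts the bare `Infinity` literals appearing in the file:

```python
import json

# Feature-spec fragment mirroring the new meta/info.json entries.
spec = json.loads('''{
  "observation.camera_pose": {"type": "Box", "shape": [7],
    "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity],
    "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]},
  "observation.headset_position": {"type": "Box", "shape": [3],
    "low": [-Infinity, -Infinity, -Infinity],
    "high": [Infinity, Infinity, Infinity]}
}''')

def split_pose(pose7):
    # Assumed layout: position (x, y, z) then quaternion (x, y, z, w).
    return pose7[:3], pose7[3:]
```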