exptech committed on
Commit
426914e
·
verified ·
1 Parent(s): 1eac24e

Add video2robot + MOVIN Studio to pipeline, fix Motion-Player-ROS links, add V_Rocamena, 60 clips

Files changed (3)
  1. README.md +4 -2
  2. data.json +63 -10
  3. index.html +28 -10
README.md CHANGED
@@ -10,7 +10,7 @@ license: cc-by-4.0
10
 
11
  # G1 Moves
12
 
13
- 59 motion capture clips for the Unitree G1 humanoid robot (29 DOF, mode 15). Dance, karate, fencing, and more — captured with MOVIN TRACIN, retargeted with Motion-Player-ROS, trained with mjlab, and deployed via RoboJuDo.
14
 
15
  Browse the interactive gallery above to view trained RL policies running in real time in your browser via MuJoCo WASM + ONNX Runtime.
16
 
@@ -22,9 +22,11 @@ Browse the interactive gallery above to view trained RL policies running in real
22
 
23
  Motion is captured with the [MOVIN TRACIN](https://movin3d.com/product/movin-tracin/) markerless mocap system (LiDAR + vision, 60 FPS) and recorded in [MOVIN Studio](https://www.movin3d.com/studio). Output is BVH format with a 51-joint humanoid skeleton.
24
25
  ### 2. Retarget
26
 
27
- Each BVH clip is retargeted to the Unitree G1's 29-DOF joint space using per-frame inverse kinematics via [movin_sdk_python](https://movin3d.com/) and a custom [ROS 2 retargeting node](https://github.com/ExperientialTechnologies/Motion-Player-ROS). The node provides real-time preview with dual BVH skeleton + robot visualization in RViz, and supports both playback of pre-recorded `.pkl` files and live retargeting from MOVIN TRACIN data via OSC/UDP.
28
 
29
  ```bash
30
  # Playback mode
 
10
 
11
  # G1 Moves
12
 
13
+ 60 motion capture clips for the Unitree G1 humanoid robot (29 DOF, mode 15). Dance, karate, fencing, and more — captured with MOVIN TRACIN, retargeted with Motion-Player-ROS, trained with mjlab, and deployed via RoboJuDo.
14
 
15
  Browse the interactive gallery above to view trained RL policies running in real time in your browser via MuJoCo WASM + ONNX Runtime.
16
 
 
22
 
23
  Motion is captured with the [MOVIN TRACIN](https://movin3d.com/product/movin-tracin/) markerless mocap system (LiDAR + vision, 60 FPS) and recorded in [MOVIN Studio](https://www.movin3d.com/studio). Output is BVH format with a 51-joint humanoid skeleton.
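
The BVH export mentioned above can be inspected with a few lines of code. A minimal sketch (the toy file below declares only 3 joints for brevity; the real MOVIN Studio export carries the 51-joint skeleton):

```python
# Count the joints declared in a BVH hierarchy. This toy file has 3;
# the MOVIN Studio export described above uses a 51-joint skeleton.
bvh = """HIERARCHY
ROOT Hips
{
    OFFSET 0 0 0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0 10 0
        CHANNELS 3 Zrotation Xrotation Yrotation
        JOINT Head
        {
            OFFSET 0 15 0
            CHANNELS 3 Zrotation Xrotation Yrotation
            End Site
            {
                OFFSET 0 5 0
            }
        }
    }
}
"""

def count_joints(text: str) -> int:
    # ROOT and JOINT lines each declare one named joint; End Site markers
    # terminate a chain and are not joints themselves.
    return sum(1 for line in text.splitlines()
               if line.strip().startswith(("ROOT ", "JOINT ")))

assert count_joints(bvh) == 3
```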
24
 
25
+ Alternatively, motions can be extracted from any video (YouTube, phone, AI-generated) using [video2robot](https://github.com/AIM-Intelligence/video2robot), which runs PromptHMR pose extraction followed by [GMR](https://github.com/YanjieZe/GMR) inverse kinematics retargeting — no mocap hardware needed.
26
+
27
  ### 2. Retarget
28
 
29
+ Each BVH clip is retargeted to the Unitree G1's 29-DOF joint space using per-frame inverse kinematics via [MOVIN-SDK-Python](https://github.com/MOVIN3D/MOVIN-SDK-Python) and a custom [ROS 2 retargeting node](https://github.com/MOVIN3D/Motion-Player-ROS). The node provides real-time preview with dual BVH skeleton + robot visualization in RViz, and supports both playback of pre-recorded `.pkl` files and live retargeting from MOVIN TRACIN data via OSC/UDP.
30
 
31
  ```bash
32
  # Playback mode
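
The retargeting step above relies on per-frame inverse kinematics. The sketch below is not the MOVIN SDK solver; it is a toy 2-link planar IK solved by finite-difference gradient descent, only to illustrate what "per-frame IK" computes (link lengths, target, and step size are made up):

```python
import math

def fk(theta1, theta2, l1=1.0, l2=1.0):
    # Forward kinematics of a planar 2-link arm: joint angles -> end effector.
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def ik_step(theta, target, lr=0.1, eps=1e-5):
    # One gradient-descent step on the squared end-effector error,
    # using finite differences so no analytic Jacobian is needed.
    def err(t):
        x, y = fk(*t)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2
    e0 = err(theta)
    grad = []
    for i in range(2):
        t = list(theta)
        t[i] += eps
        grad.append((err(t) - e0) / eps)
    return [theta[i] - lr * grad[i] for i in range(2)]

# Solve one "frame": drive the arm toward a captured keypoint.
theta = [0.3, 0.3]
target = (1.2, 0.9)
for _ in range(500):
    theta = ik_step(theta, target)

x, y = fk(*theta)
```

In the real pipeline this solve runs once per mocap frame, mapping the 51-joint human pose onto the G1's 29 joints instead of a 2-link arm.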
data.json CHANGED
@@ -1083,9 +1083,13 @@
1083
  "training": {
1084
  "gif": "karate/M_Move17/training/M_Move17_training.gif",
1085
  "mp4": "karate/M_Move17/training/M_Move17_training.mp4"
1086
  }
1087
  },
1088
- "has_policy": false,
1089
  "has_onnx": true
1090
  },
1091
  {
@@ -1158,9 +1162,13 @@
1158
  "training": {
1159
  "gif": "karate/M_Move2/training/M_Move2_training.gif",
1160
  "mp4": "karate/M_Move2/training/M_Move2_training.mp4"
1161
  }
1162
  },
1163
- "has_policy": false,
1164
  "has_onnx": true
1165
  },
1166
  {
@@ -1208,9 +1216,13 @@
1208
  "training": {
1209
  "gif": "karate/M_Move3/training/M_Move3_training.gif",
1210
  "mp4": "karate/M_Move3/training/M_Move3_training.mp4"
1211
  }
1212
  },
1213
- "has_policy": false,
1214
  "has_onnx": true
1215
  },
1216
  {
@@ -1233,9 +1245,13 @@
1233
  "training": {
1234
  "gif": "karate/M_Move4/training/M_Move4_training.gif",
1235
  "mp4": "karate/M_Move4/training/M_Move4_training.mp4"
1236
  }
1237
  },
1238
- "has_policy": false,
1239
  "has_onnx": true
1240
  },
1241
  {
@@ -1258,9 +1274,13 @@
1258
  "training": {
1259
  "gif": "karate/M_Move5/training/M_Move5_training.gif",
1260
  "mp4": "karate/M_Move5/training/M_Move5_training.mp4"
1261
  }
1262
  },
1263
- "has_policy": false,
1264
  "has_onnx": true
1265
  },
1266
  {
@@ -1283,9 +1303,13 @@
1283
  "training": {
1284
  "gif": "karate/M_Move6/training/M_Move6_training.gif",
1285
  "mp4": "karate/M_Move6/training/M_Move6_training.mp4"
1286
  }
1287
  },
1288
- "has_policy": false,
1289
  "has_onnx": true
1290
  },
1291
  {
@@ -1358,9 +1382,13 @@
1358
  "training": {
1359
  "gif": "karate/M_Move9/training/M_Move9_training.gif",
1360
  "mp4": "karate/M_Move9/training/M_Move9_training.mp4"
1361
  }
1362
  },
1363
- "has_policy": false,
1364
  "has_onnx": true
1365
  },
1366
  {
@@ -1603,13 +1631,38 @@
1603
  },
1604
  "has_policy": true,
1605
  "has_onnx": true
1606
  }
1607
  ],
1608
  "stats": {
1609
  "dance": 28,
1610
  "karate": 27,
1611
- "bonus": 4,
1612
- "policies": 32,
1613
- "total": 59
1614
  }
1615
  }
 
1083
  "training": {
1084
  "gif": "karate/M_Move17/training/M_Move17_training.gif",
1085
  "mp4": "karate/M_Move17/training/M_Move17_training.mp4"
1086
+ },
1087
+ "policy": {
1088
+ "gif": "karate/M_Move17/policy/M_Move17_policy.gif",
1089
+ "mp4": "karate/M_Move17/policy/M_Move17_policy.mp4"
1090
  }
1091
  },
1092
+ "has_policy": true,
1093
  "has_onnx": true
1094
  },
1095
  {
 
1162
  "training": {
1163
  "gif": "karate/M_Move2/training/M_Move2_training.gif",
1164
  "mp4": "karate/M_Move2/training/M_Move2_training.mp4"
1165
+ },
1166
+ "policy": {
1167
+ "gif": "karate/M_Move2/policy/M_Move2_policy.gif",
1168
+ "mp4": "karate/M_Move2/policy/M_Move2_policy.mp4"
1169
  }
1170
  },
1171
+ "has_policy": true,
1172
  "has_onnx": true
1173
  },
1174
  {
 
1216
  "training": {
1217
  "gif": "karate/M_Move3/training/M_Move3_training.gif",
1218
  "mp4": "karate/M_Move3/training/M_Move3_training.mp4"
1219
+ },
1220
+ "policy": {
1221
+ "gif": "karate/M_Move3/policy/M_Move3_policy.gif",
1222
+ "mp4": "karate/M_Move3/policy/M_Move3_policy.mp4"
1223
  }
1224
  },
1225
+ "has_policy": true,
1226
  "has_onnx": true
1227
  },
1228
  {
 
1245
  "training": {
1246
  "gif": "karate/M_Move4/training/M_Move4_training.gif",
1247
  "mp4": "karate/M_Move4/training/M_Move4_training.mp4"
1248
+ },
1249
+ "policy": {
1250
+ "gif": "karate/M_Move4/policy/M_Move4_policy.gif",
1251
+ "mp4": "karate/M_Move4/policy/M_Move4_policy.mp4"
1252
  }
1253
  },
1254
+ "has_policy": true,
1255
  "has_onnx": true
1256
  },
1257
  {
 
1274
  "training": {
1275
  "gif": "karate/M_Move5/training/M_Move5_training.gif",
1276
  "mp4": "karate/M_Move5/training/M_Move5_training.mp4"
1277
+ },
1278
+ "policy": {
1279
+ "gif": "karate/M_Move5/policy/M_Move5_policy.gif",
1280
+ "mp4": "karate/M_Move5/policy/M_Move5_policy.mp4"
1281
  }
1282
  },
1283
+ "has_policy": true,
1284
  "has_onnx": true
1285
  },
1286
  {
 
1303
  "training": {
1304
  "gif": "karate/M_Move6/training/M_Move6_training.gif",
1305
  "mp4": "karate/M_Move6/training/M_Move6_training.mp4"
1306
+ },
1307
+ "policy": {
1308
+ "gif": "karate/M_Move6/policy/M_Move6_policy.gif",
1309
+ "mp4": "karate/M_Move6/policy/M_Move6_policy.mp4"
1310
  }
1311
  },
1312
+ "has_policy": true,
1313
  "has_onnx": true
1314
  },
1315
  {
 
1382
  "training": {
1383
  "gif": "karate/M_Move9/training/M_Move9_training.gif",
1384
  "mp4": "karate/M_Move9/training/M_Move9_training.mp4"
1385
+ },
1386
+ "policy": {
1387
+ "gif": "karate/M_Move9/policy/M_Move9_policy.gif",
1388
+ "mp4": "karate/M_Move9/policy/M_Move9_policy.mp4"
1389
  }
1390
  },
1391
+ "has_policy": true,
1392
  "has_onnx": true
1393
  },
1394
  {
 
1631
  },
1632
  "has_policy": true,
1633
  "has_onnx": true
1634
+ },
1635
+ {
1636
+ "id": "V_Rocamena",
1637
+ "name": "Rocamena",
1638
+ "category": "bonus",
1639
+ "performer": "YouTube",
1640
+ "duration": 9.5,
1641
+ "fps": 30,
1642
+ "frames": 285,
1643
+ "stages": {
1644
+ "capture": {
1645
+ "gif": "bonus/V_Rocamena/capture/V_Rocamena.gif",
1646
+ "mp4": "bonus/V_Rocamena/capture/V_Rocamena.mp4"
1647
+ },
1648
+ "retarget": {
1649
+ "gif": "bonus/V_Rocamena/retarget/V_Rocamena_retarget.gif",
1650
+ "mp4": "bonus/V_Rocamena/retarget/V_Rocamena_retarget.mp4"
1651
+ },
1652
+ "training": {
1653
+ "gif": "bonus/V_Rocamena/training/V_Rocamena_training.gif",
1654
+ "mp4": "bonus/V_Rocamena/training/V_Rocamena_training.mp4"
1655
+ }
1656
+ },
1657
+ "has_policy": false,
1658
+ "has_onnx": false
1659
  }
1660
  ],
1661
  "stats": {
1662
  "dance": 28,
1663
  "karate": 27,
1664
+ "bonus": 5,
1665
+ "policies": 39,
1666
+ "total": 60
1667
  }
1668
  }
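
The `stats` block this commit bumps (`bonus`, `policies`, `total`) is easy to get out of sync with the clip list. A small hypothetical validator (field names taken from the diff above; the two-clip sample is made up) could guard it:

```python
# Check that data.json's summary stats agree with the clip entries.
# Sample data is a made-up two-clip subset mirroring the fields above.
clips = [
    {"id": "M_Move17", "category": "karate", "has_policy": True},
    {"id": "V_Rocamena", "category": "bonus", "has_policy": False},
]
stats = {"karate": 1, "bonus": 1, "policies": 1, "total": 2}

def check_stats(clips, stats):
    by_cat = {}
    for clip in clips:
        by_cat[clip["category"]] = by_cat.get(clip["category"], 0) + 1
    assert sum(by_cat.values()) == stats["total"], "per-category counts != total"
    assert sum(c["has_policy"] for c in clips) == stats["policies"], \
        "has_policy count != policies"
    for cat, n in by_cat.items():
        assert stats.get(cat) == n, f"stat mismatch for {cat!r}"
    return True

check_stats(clips, stats)
```

Run against the full file, this would have confirmed the new totals here (28 + 27 + 5 = 60, with 39 policies).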
index.html CHANGED
@@ -4,7 +4,7 @@
4
  <meta charset="UTF-8">
5
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
6
  <title>G1 Moves — Motion Capture for Unitree G1</title>
7
- <meta name="description" content="59 motion capture clips for the Unitree G1 humanoid robot. Dance, karate, and more — captured with MOVIN3D, trained with MuJoCo-Warp.">
8
 
9
  <link rel="preconnect" href="https://fonts.googleapis.com">
10
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
@@ -20,7 +20,7 @@
20
  <div class="hero-content">
21
  <img src="logo.png" alt="Experiential Technologies" class="logo" width="260">
22
  <h1 class="hero-title">G1 MOVES</h1>
23
- <p class="hero-subtitle">59 motion capture clips for the Unitree G1 humanoid robot</p>
24
 
25
  <div class="pipeline-strip">
26
  <div class="pipeline-stage">
@@ -93,13 +93,13 @@
93
  <h2>ABOUT</h2>
94
  <div class="about-columns">
95
  <div class="about-text">
96
- <p>G1 Moves is a dataset of 59 motion capture clips captured with the
97
  <a href="https://movin3d.com/" target="_blank" rel="noopener">MOVIN TRACIN</a>
98
  markerless capture system (LiDAR + vision, 60 FPS). Each clip is retargeted to the Unitree G1 humanoid robot
99
  (29 DOF, mode 15) using inverse kinematics via the
100
- <a href="https://movin3d.com/" target="_blank" rel="noopener">movin_sdk_python</a>
101
  library and a custom
102
- <a href="https://github.com/ExperientialTechnologies/Motion-Player-ROS" target="_blank" rel="noopener">ROS 2 retargeting node</a>,
103
  then trained as reinforcement learning policies using
104
  <a href="https://github.com/mujocolab/mjlab" target="_blank" rel="noopener">mjlab</a>
105
  (PPO on GPU-accelerated
@@ -145,13 +145,19 @@
145
  <section class="deploy reveal">
146
  <div class="about-inner">
147
  <h2>PIPELINE</h2>
148
- <p class="deploy-intro">The complete motion-to-policy pipeline uses three open-source tools. Every trained policy is available as an ONNX model (160-dim input, 29-dim output) with baked-in observation normalization.</p>
149
  <div class="deploy-options">
150
  <div class="deploy-option">
151
  <div class="deploy-num">01</div>
152
  <h3>Motion-Player-ROS</h3>
153
  <p>ROS 2 package for retargeting and previewing motion capture on the G1. Supports both playback of pre-recorded <code>.pkl</code> files and real-time retargeting from live MOVIN TRACIN data via OSC/UDP. Dual visualization shows the original human BVH skeleton alongside the retargeted robot motion in RViz.</p>
154
- <p><a href="https://github.com/ExperientialTechnologies/Motion-Player-ROS" target="_blank" rel="noopener">github.com/ExperientialTechnologies/Motion-Player-ROS</a></p>
155
  <pre><code># Playback mode
156
  ros2 launch motion_player player.launch.py \
157
  motion_file:=clip.pkl bvh_file:=clip.bvh
@@ -161,7 +167,19 @@ ros2 launch motion_player realtime.launch.py \
161
  port:=9000 human_height:=1.75</code></pre>
162
  </div>
163
  <div class="deploy-option">
164
- <div class="deploy-num">02</div>
165
  <h3>mjlab</h3>
166
  <p>GPU-accelerated RL training framework combining Isaac Lab's manager-based API with <a href="https://github.com/google-deepmind/mujoco_warp" target="_blank" rel="noopener">MuJoCo-Warp</a> simulation. Trains PPO policies across 8,192 parallel environments on a single GPU. Motion imitation uses 14-body tracking with reward shaping for position, orientation, velocity, and collision avoidance.</p>
167
  <p><a href="https://github.com/mujocolab/mjlab" target="_blank" rel="noopener">github.com/mujocolab/mjlab</a></p>
@@ -172,7 +190,7 @@ uv run train Mjlab-Tracking-Flat-Unitree-G1 \
172
  --agent.max-iterations 30000</code></pre>
173
  </div>
174
  <div class="deploy-option">
175
- <div class="deploy-num">03</div>
176
  <h3>RoboJuDo</h3>
177
  <p>Plug-and-play sim2real deployment framework for humanoid robots. Modular architecture separates controller (joystick/keyboard/motion), environment (MuJoCo sim or real robot via Unitree SDK), and policy (ONNX/JIT). Supports seamless switching between sim2sim validation and real hardware with minimal code changes.</p>
178
  <p><a href="https://github.com/GDDG08/RoboJuDo" target="_blank" rel="noopener">github.com/GDDG08/RoboJuDo</a></p>
@@ -285,7 +303,7 @@ python scripts/run_pipeline.py --config=g1_real</code></pre>
285
  </div>
286
  </div>
287
  <div class="equip-block">
288
- <h3>Inference</h3>
289
  <div class="equip-name"><a href="https://www.dell.com/en-us/shop/desktop-computers/dell-pro-max-desktop-with-nvidia-gb10/spd/dell-pro-max-gb10-desktop/usepmgb10bts" target="_blank" rel="noopener">Dell Pro Max with GB10</a></div>
290
  <div class="equipment-grid">
291
  <div class="equipment-item">
 
4
  <meta charset="UTF-8">
5
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
6
  <title>G1 Moves — Motion Capture for Unitree G1</title>
7
+ <meta name="description" content="60 motion capture clips for the Unitree G1 humanoid robot. Dance, karate, and more — captured with MOVIN3D, trained with MuJoCo-Warp.">
8
 
9
  <link rel="preconnect" href="https://fonts.googleapis.com">
10
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
 
20
  <div class="hero-content">
21
  <img src="logo.png" alt="Experiential Technologies" class="logo" width="260">
22
  <h1 class="hero-title">G1 MOVES</h1>
23
+ <p class="hero-subtitle">60 motion capture clips for the Unitree G1 humanoid robot</p>
24
 
25
  <div class="pipeline-strip">
26
  <div class="pipeline-stage">
 
93
  <h2>ABOUT</h2>
94
  <div class="about-columns">
95
  <div class="about-text">
96
+ <p>G1 Moves is a dataset of 60 motion capture clips captured with the
97
  <a href="https://movin3d.com/" target="_blank" rel="noopener">MOVIN TRACIN</a>
98
  markerless capture system (LiDAR + vision, 60 FPS). Each clip is retargeted to the Unitree G1 humanoid robot
99
  (29 DOF, mode 15) using inverse kinematics via the
100
+ <a href="https://github.com/MOVIN3D/MOVIN-SDK-Python" target="_blank" rel="noopener">MOVIN-SDK-Python</a>
101
  library and a custom
102
+ <a href="https://github.com/MOVIN3D/Motion-Player-ROS" target="_blank" rel="noopener">ROS 2 retargeting node</a>,
103
  then trained as reinforcement learning policies using
104
  <a href="https://github.com/mujocolab/mjlab" target="_blank" rel="noopener">mjlab</a>
105
  (PPO on GPU-accelerated
 
145
  <section class="deploy reveal">
146
  <div class="about-inner">
147
  <h2>PIPELINE</h2>
148
+ <p class="deploy-intro">The complete motion-to-policy pipeline uses five tools. Every trained policy is available as an ONNX model (160-dim input, 29-dim output) with baked-in observation normalization.</p>
149
  <div class="deploy-options">
150
  <div class="deploy-option">
151
  <div class="deploy-num">01</div>
152
+ <h3>MOVIN Studio</h3>
153
+ <p>Recording and processing software for MOVIN TRACIN markerless mocap. Captures full-body motion in real time using LiDAR + vision at 60 FPS. Exports BVH with a 51-joint humanoid skeleton, plus FBX for Blender, Maya, Unreal, and Unity.</p>
154
+ <p><a href="https://www.movin3d.com/studio" target="_blank" rel="noopener">movin3d.com/studio</a></p>
155
+ </div>
156
+ <div class="deploy-option">
157
+ <div class="deploy-num">02</div>
158
  <h3>Motion-Player-ROS</h3>
159
  <p>ROS 2 package for retargeting and previewing motion capture on the G1. Supports both playback of pre-recorded <code>.pkl</code> files and real-time retargeting from live MOVIN TRACIN data via OSC/UDP. Dual visualization shows the original human BVH skeleton alongside the retargeted robot motion in RViz.</p>
160
+ <p><a href="https://github.com/MOVIN3D/Motion-Player-ROS" target="_blank" rel="noopener">github.com/MOVIN3D/Motion-Player-ROS</a></p>
161
  <pre><code># Playback mode
162
  ros2 launch motion_player player.launch.py \
163
  motion_file:=clip.pkl bvh_file:=clip.bvh
 
167
  port:=9000 human_height:=1.75</code></pre>
168
  </div>
169
  <div class="deploy-option">
170
+ <div class="deploy-num">03</div>
171
+ <h3>video2robot</h3>
172
+ <p>End-to-end pipeline converting any video to robot motion. Extracts 3D human pose via <a href="https://github.com/AIM-Intelligence/video2robot" target="_blank" rel="noopener">PromptHMR</a> (SMPL-X), then retargets to the G1's 29-DOF joint space using <a href="https://github.com/YanjieZe/GMR" target="_blank" rel="noopener">GMR</a> inverse kinematics. Works with YouTube, phone video, or AI-generated footage — no mocap hardware needed.</p>
173
+ <p><a href="https://github.com/AIM-Intelligence/video2robot" target="_blank" rel="noopener">github.com/AIM-Intelligence/video2robot</a></p>
174
+ <pre><code># Extract pose from video
175
+ python scripts/extract_pose.py --project data/clip
176
+
177
+ # Retarget to G1
178
+ python scripts/convert_to_robot.py \
179
+ --project data/clip --robot unitree_g1</code></pre>
180
+ </div>
181
+ <div class="deploy-option">
182
+ <div class="deploy-num">04</div>
183
  <h3>mjlab</h3>
184
  <p>GPU-accelerated RL training framework combining Isaac Lab's manager-based API with <a href="https://github.com/google-deepmind/mujoco_warp" target="_blank" rel="noopener">MuJoCo-Warp</a> simulation. Trains PPO policies across 8,192 parallel environments on a single GPU. Motion imitation uses 14-body tracking with reward shaping for position, orientation, velocity, and collision avoidance.</p>
185
  <p><a href="https://github.com/mujocolab/mjlab" target="_blank" rel="noopener">github.com/mujocolab/mjlab</a></p>
 
190
  --agent.max-iterations 30000</code></pre>
191
  </div>
192
  <div class="deploy-option">
193
+ <div class="deploy-num">05</div>
194
  <h3>RoboJuDo</h3>
195
  <p>Plug-and-play sim2real deployment framework for humanoid robots. Modular architecture separates controller (joystick/keyboard/motion), environment (MuJoCo sim or real robot via Unitree SDK), and policy (ONNX/JIT). Supports seamless switching between sim2sim validation and real hardware with minimal code changes.</p>
196
  <p><a href="https://github.com/GDDG08/RoboJuDo" target="_blank" rel="noopener">github.com/GDDG08/RoboJuDo</a></p>
 
303
  </div>
304
  </div>
305
  <div class="equip-block">
306
+ <h3>Training</h3>
307
  <div class="equip-name"><a href="https://www.dell.com/en-us/shop/desktop-computers/dell-pro-max-desktop-with-nvidia-gb10/spd/dell-pro-max-gb10-desktop/usepmgb10bts" target="_blank" rel="noopener">Dell Pro Max with GB10</a></div>
308
  <div class="equipment-grid">
309
  <div class="equipment-item">
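
The page describes each trained policy as an ONNX model with a 160-dim observation, a 29-dim action, and baked-in normalization. As a shape-level sketch only (a NumPy stand-in with random placeholder weights, not the exported network):

```python
import numpy as np

OBS_DIM, ACT_DIM = 160, 29  # dims quoted for the exported ONNX policies
rng = np.random.default_rng(0)

# Placeholder normalization stats and a single linear layer; the real
# ONNX graph bakes its learned statistics and weights in the same way.
obs_mean = np.zeros(OBS_DIM)
obs_std = np.ones(OBS_DIM)
W = 0.01 * rng.standard_normal((ACT_DIM, OBS_DIM))
b = np.zeros(ACT_DIM)

def policy_step(obs):
    # "Baked-in observation normalization": callers pass raw observations,
    # and the graph normalizes internally before the policy network runs.
    x = (obs - obs_mean) / (obs_std + 1e-8)
    return np.tanh(W @ x + b)  # one action value per actuated DOF

action = policy_step(rng.standard_normal(OBS_DIM))
```

Swapping this stand-in for an `onnxruntime` session keeps the same call contract: raw 160-dim observation in, 29-dim joint action out.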