wuyan01 committed
Commit ab10d5f · 1 Parent(s): 2862740

update readme and replay script

README.md CHANGED
@@ -7,4 +7,64 @@ tags:
  - imitation-learning
  size_categories:
  - 10K<n<100K
  ---
+
+ # UniPhys Dataset: Offline Dataset for Physics-based Character Control
+
+ This dataset is part of the [UniPhys](https://wuyan01.github.io/uniphys-project/) project and enables large-scale training of diffusion policies for physics-based humanoid control with SMPL-like characters. The state-action pairs are generated by the [PULSE](https://www.zhengyiluo.com/PULSE-Site/) motion-tracking policy.
+
+ ## Dataset Overview
+ * `amass_state-action-pairs`: state-action pairs for motion sequences from the AMASS dataset (excluding infeasible motions).
+ * `babel_state-action-text-pairs`: packaged AMASS motions with [BABEL](https://babel.is.tue.mpg.de/) frame-level text annotations.
+
+ ### AMASS state-action pairs
+
+ #### Data Structure
+ For each sequence, the dataset contains:
+ | Field | Shape | Description |
+ |-------|-------|-------------|
+ | `body_pos` | `[T, 24, 3]` | Joint positions in global space |
+ | `dof_state` | `[T, 69, 2]` | Joint rotations (dim 0) and velocities (dim 1)<br>*69 = 23 joints × 3 DoF each* |
+ | `root_state` | `[T, 13]` | Contains:<br>- Position (0:3)<br>- Quaternion (3:7)<br>- Linear velocity (7:10)<br>- Angular velocity (10:13) |
+ | `action` | `[T, 69]` | Joint angle targets (23 joints × 3 DoF) |
+ | `pulse_z` | `[T, 32]` | Latent action from the PULSE policy |
+ | `is_succ` | `bool` | Tracking success flag (True/False) |
+ | `fps` | `int` | Frame rate (30 FPS) |
+
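The table above can be exercised with a quick sanity check. A real sequence would be loaded with `joblib.load(path)` as in the replay script; the snippet below uses synthetic zero arrays as stand-ins, but the shapes and the `root_state` slicing follow the table directly (the variable names are illustrative only):

```python
import numpy as np

T = 4  # hypothetical sequence length
# Synthetic stand-in for one loaded sequence (real files are joblib pickles)
seq = {
    "body_pos": np.zeros((T, 24, 3)),
    "dof_state": np.zeros((T, 69, 2)),
    "root_state": np.zeros((T, 13)),
    "action": np.zeros((T, 69)),
    "pulse_z": np.zeros((T, 32)),
    "is_succ": True,
    "fps": 30,
}

# Unpack the root_state layout described in the table
root = seq["root_state"]
pos, quat = root[:, 0:3], root[:, 3:7]
lin_vel, ang_vel = root[:, 7:10], root[:, 10:13]
assert pos.shape == (T, 3) and quat.shape == (T, 4)
assert lin_vel.shape == (T, 3) and ang_vel.shape == (T, 3)
```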
+ #### Visualization
+ To replay a sequence:
+ ```bash
+ python replay_amass_state_action_pairs.py --load_motion_path amass_state-action-pairs/$YOUR_FILE_PATH
+ ```
+
+ ### BABEL state-action-text pairs
+ This is the training dataset used in [UniPhys](https://wuyan01.github.io/uniphys-project/).
+
+ #### Visualization
+ To replay the packaged offline BABEL dataset along with its frame-level text annotations:
+ ```bash
+ python replay_babel_state_action_text_pairs.py --load_motion_path babel_state-action-text-pairs/babel_train.pkl
+ ```
+
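For scripted access without the viewer, the packaged pickle can be traversed with the same keys the replay script uses (`root_state_all`, `dof_state_all`, `motion_file`, `is_succ_all`, `frame_labels_all`). The entry below is a synthetic stand-in, not real data; it only illustrates how per-frame text labels are expanded from the `start_t`/`end_t` annotations, mirroring the preprocessing in `replay_babel_state_action_text_pairs.py`:

```python
import numpy as np

fps = 30.0
# Synthetic stand-in for a packaged BABEL pickle with one 60-frame motion
# (key names taken from the replay script; a real file is loaded via joblib.load)
motion = {
    "root_state_all": [np.zeros((60, 13))],
    "dof_state_all": [np.zeros((60, 69, 2))],
    "motion_file": ["01_01_poses.npz"],
    "is_succ_all": [True],
    "frame_labels_all": [[{"start_t": 0.0, "end_t": 1.0, "proc_label": "walk"}]],
}

b = 0
motion_length = len(motion["root_state_all"][b])

# Expand segment-level annotations into one label per frame
frame_labels = ["none"] * motion_length
for ann in motion["frame_labels_all"][b]:
    start_f, end_f = int(ann["start_t"] * fps), int(ann["end_t"] * fps)
    for f in range(start_f, min(end_f, motion_length)):
        frame_labels[f] = ann["proc_label"]

print(frame_labels[0], frame_labels[-1])  # → walk none
```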
+ ## Citation
+ If you find this dataset useful, please cite:
+ ```bibtex
+ @inproceedings{wu2025uniphys,
+   title={UniPhys: Unified Planner and Controller with Diffusion for Flexible Physics-Based Character Control},
+   author={Wu, Yan and Karunratanakul, Korrawe and Luo, Zhengyi and Tang, Siyu},
+   booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
+   year={2025}
+ }
+ ```
+
+ ```bibtex
+ @inproceedings{luo2024universal,
+   title={Universal Humanoid Motion Representations for Physics-Based Control},
+   author={Zhengyi Luo and Jinkun Cao and Josh Merel and Alexander Winkler and Jing Huang and Kris M. Kitani and Weipeng Xu},
+   booktitle={The Twelfth International Conference on Learning Representations},
+   year={2024},
+   url={https://openreview.net/forum?id=OrOd8PxOO2}
+ }
+ ```
replay.py → replay_amass_state_action_pairs.py RENAMED
@@ -92,7 +92,7 @@ if viewer is None:
  # ---------------------------------------------------------
  # Load asset
  # ---------------------------------------------------------
- asset_root = "./assets"
+ asset_root = "./assets/"
  asset_file = asset_descriptors[args.asset_id].file_name

  asset_options = gymapi.AssetOptions()
@@ -158,8 +158,7 @@ cam_target = gymapi.Vec3(0, 0, 1)
  gym.viewer_camera_look_at(viewer, None, cam_pos, cam_target)

  time_step = 0
- fps = 60.0
- dt = 1.0 / fps
+ fps = 30.0

  print("Starting playback...")
@@ -180,5 +179,10 @@ while not gym.query_viewer_has_closed(viewer):

  time_step += 1

+ print("\r" + " " * 200, end="")
+ print(f"\rTime step: {motion_time} / {motion_length}", end="")
+
+ import time; time.sleep(1.0 / fps)
+
  gym.destroy_viewer(viewer)
  gym.destroy_sim(sim)
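The progress line added in this commit relies on the carriage-return trick: blank out the current terminal line with spaces, then rewrite it in place. A minimal, viewer-free sketch of that pattern (the `show_progress` helper is hypothetical, not part of the script):

```python
import io
import sys

def show_progress(step, total, stream=sys.stdout, pad=40):
    # Blank out the current line, then rewrite it in place via "\r"
    stream.write("\r" + " " * pad)
    stream.write(f"\rTime step: {step} / {total}")
    stream.flush()

# Capture the output instead of printing to a terminal
buf = io.StringIO()
for step in range(3):
    show_progress(step, 3, stream=buf)
print(buf.getvalue().endswith("Time step: 2 / 3"))  # → True
```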
replay_babel_state_action_text_pairs.py ADDED
@@ -0,0 +1,202 @@
+ import os
+ import joblib
+ import numpy as np
+ from isaacgym import gymapi, gymutil, gymtorch
+ import torch
+
+ # ---------------------------------------------------------
+ # Asset description class
+ # ---------------------------------------------------------
+ class AssetDesc:
+     def __init__(self, file_name, flip_visual_attachments=False):
+         self.file_name = file_name
+         self.flip_visual_attachments = flip_visual_attachments
+
+ # ---------------------------------------------------------
+ # Define assets
+ # ---------------------------------------------------------
+ asset_descriptors = [AssetDesc("smpl_humanoid.xml", False)]
+
+ # ---------------------------------------------------------
+ # Parse arguments
+ # ---------------------------------------------------------
+ args = gymutil.parse_arguments(
+     description="Visualize motion sequence in Isaac Gym",
+     custom_parameters=[
+         {
+             "name": "--asset_id",
+             "type": int,
+             "default": 0,
+             "help": f"Asset id (0 - {len(asset_descriptors) - 1})",
+         },
+         {
+             "name": "--show_axis",
+             "action": "store_true",
+             "help": "Visualize DOF axis",
+         },
+         {
+             "name": "--load_motion_path",
+             "type": str,
+             "default": "./CMU/01/01_01_poses.pkl",
+             "help": "Path to motion pickle file",
+         },
+     ],
+ )
+
+ if not (0 <= args.asset_id < len(asset_descriptors)):
+     print(f"*** Invalid asset_id specified. Valid range is 0 to {len(asset_descriptors) - 1}")
+     quit()
+
+
+ # ---------------------------------------------------------
+ # Initialize simulator
+ # ---------------------------------------------------------
+ gym = gymapi.acquire_gym()
+
+ sim_params = gymapi.SimParams()
+ sim_params.dt = 1.0 / 60.0
+ sim_params.up_axis = gymapi.UP_AXIS_Z
+ sim_params.gravity = gymapi.Vec3(0.0, 0.0, -9.81)
+
+ if args.physics_engine == gymapi.SIM_PHYSX:
+     sim_params.physx.solver_type = 1
+     sim_params.physx.num_position_iterations = 6
+     sim_params.physx.num_threads = args.num_threads
+     sim_params.physx.use_gpu = args.use_gpu
+     sim_params.use_gpu_pipeline = args.use_gpu_pipeline
+
+ if not args.use_gpu_pipeline:
+     print("WARNING: Forcing CPU pipeline.")
+
+ sim = gym.create_sim(args.compute_device_id, args.graphics_device_id, args.physics_engine, sim_params)
+ if sim is None:
+     print("*** Failed to create sim")
+     quit()
+
+
+ # ---------------------------------------------------------
+ # Ground and viewer setup
+ # ---------------------------------------------------------
+ plane_params = gymapi.PlaneParams()
+ plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
+ gym.add_ground(sim, plane_params)
+
+ viewer = gym.create_viewer(sim, gymapi.CameraProperties())
+ if viewer is None:
+     print("*** Failed to create viewer")
+     quit()
+
+
+ # ---------------------------------------------------------
+ # Load asset
+ # ---------------------------------------------------------
+ asset_root = "./assets/"
+ asset_file = asset_descriptors[args.asset_id].file_name
+
+ asset_options = gymapi.AssetOptions()
+ asset_options.use_mesh_materials = True
+
+ print(f"Loading asset '{asset_file}' from '{asset_root}'")
+ asset = gym.load_asset(sim, asset_root, asset_file, asset_options)
+
+
+ # ---------------------------------------------------------
+ # Create environment
+ # ---------------------------------------------------------
+ num_envs = 1
+ num_per_row = 1
+ spacing = 5.0
+
+ env_lower = gymapi.Vec3(-spacing, -spacing, 0)
+ env_upper = gymapi.Vec3(spacing, spacing, spacing)
+
+ envs, actor_handles = [], []
+ num_dofs = gym.get_asset_dof_count(asset)
+
+ print(f"Creating {num_envs} environment(s)")
+ for i in range(num_envs):
+     env = gym.create_env(sim, env_lower, env_upper, num_per_row)
+     envs.append(env)
+
+     pose = gymapi.Transform()
+     actor_handle = gym.create_actor(env, asset, pose, "actor", i, 1)
+     actor_handles.append(actor_handle)
+
+     dof_states = np.zeros(num_dofs, dtype=gymapi.DofState.dtype)
+     gym.set_actor_dof_states(env, actor_handle, dof_states, gymapi.STATE_ALL)
+
+ gym.prepare_sim(sim)
+
+ # ---------------------------------------------------------
+ # Load motion sequence
+ # ---------------------------------------------------------
+ load_motion_path = args.load_motion_path
+ assert os.path.exists(load_motion_path), f"Motion file not found: {load_motion_path}"
+
+ motion = joblib.load(load_motion_path)
+ batch_size = len(motion["root_state_all"])
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+ print(f"Loaded {batch_size} motions from {load_motion_path}")
+
+
+ # ---------------------------------------------------------
+ # Simulation loop
+ # ---------------------------------------------------------
+ rigidbody_state = gymtorch.wrap_tensor(gym.acquire_rigid_body_state_tensor(sim)).reshape(num_envs, -1, 13)
+ actor_root_state = gymtorch.wrap_tensor(gym.acquire_actor_root_state_tensor(sim))
+
+ cam_pos = gymapi.Vec3(0, -5.0, 3)
+ cam_target = gymapi.Vec3(0, 0, 1)
+ gym.viewer_camera_look_at(viewer, None, cam_pos, cam_target)
+
+ time_step = 0
+ fps = 30.0
+
+ print("Starting playback...")
+
+ while not gym.query_viewer_has_closed(viewer):
+
+     for b in range(batch_size):
+
+         time_step = 0
+         motion_length = len(motion["root_state_all"][b])
+         motion_name = motion["motion_file"][b].split(".")[0]
+         is_succ = motion["is_succ_all"][b]
+         root_states = torch.from_numpy(motion["root_state_all"][b]).to(device)
+         dof_states = torch.from_numpy(motion["dof_state_all"][b]).to(device)
+
+         # preprocess the text annotations
+         raw_text_anns = motion["frame_labels_all"][b]
+         for ann in raw_text_anns:
+             ann['start_f'] = int(ann['start_t'] * fps)
+             ann['end_f'] = int(ann['end_t'] * fps)
+
+         frame_labels = ["none"] * motion_length
+         for ann in raw_text_anns:
+             for f in range(ann['start_f'], min(ann['end_f'], motion_length)):
+                 frame_labels[f] = ann['proc_label']
+
+         for t in range(motion_length):
+             motion_time = time_step % motion_length
+
+             if args.show_axis:
+                 gym.clear_lines(viewer)
+
+             gym.set_actor_root_state_tensor(sim, gymtorch.unwrap_tensor(root_states[motion_time:motion_time + 1]))
+             gym.set_dof_state_tensor(sim, gymtorch.unwrap_tensor(dof_states[motion_time]))
+
+             gym.simulate(sim)
+             gym.fetch_results(sim, True)
+             gym.step_graphics(sim)
+             gym.draw_viewer(viewer, sim, True)
+             gym.sync_frame_time(sim)
+
+             time_step += 1
+             print("\r" + " " * 200, end="")
+             print(f"\rMotion {b + 1}/{batch_size}, Name: {motion_name}, is_succ: {is_succ}, frame {t + 1}/{motion_length}, text: {frame_labels[t]}", end="")
+
+             import time; time.sleep(1.0 / fps)
+
+ gym.destroy_viewer(viewer)
+ gym.destroy_sim(sim)