Hang917 committed on
Commit
5428cb0
·
1 Parent(s): 36374c2

feat: initial code + dataset

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README_dmcontrol_collect.md +87 -0
  2. __pycache__/dataset.cpython-310.pyc +0 -0
  3. dataset.py +48 -0
  4. dataset/sb3_cheetah_run_ckpt001_2025-08-08_01-32-13.npz +3 -0
  5. dataset/sb3_cheetah_run_ckpt001_2025-08-08_01-32-13_metadata.pkl +0 -0
  6. dataset/sb3_cheetah_run_ckpt010_2025-08-08_01-32-52.npz +3 -0
  7. dataset/sb3_cheetah_run_ckpt010_2025-08-08_01-32-52_metadata.pkl +0 -0
  8. dataset/sb3_cheetah_run_ckpt020_2025-08-08_01-33-31.npz +3 -0
  9. dataset/sb3_cheetah_run_ckpt020_2025-08-08_01-33-31_metadata.pkl +0 -0
  10. dataset/sb3_cheetah_run_ckpt030_2025-08-08_01-34-10.npz +3 -0
  11. dataset/sb3_cheetah_run_ckpt030_2025-08-08_01-34-10_metadata.pkl +0 -0
  12. dataset/sb3_cheetah_run_ckpt040_2025-08-08_01-34-50.npz +3 -0
  13. dataset/sb3_cheetah_run_ckpt040_2025-08-08_01-34-50_metadata.pkl +0 -0
  14. dataset/sb3_cheetah_run_ckpt050_2025-08-08_01-35-40.npz +3 -0
  15. dataset/sb3_cheetah_run_ckpt050_2025-08-08_01-35-40_metadata.pkl +0 -0
  16. dmcontrol_collect.py +294 -0
  17. sb3_collect.py +312 -0
  18. train_sb3_dmcontrol.py +203 -0
  19. weights/cheetah/run/ckpt-1.pt +3 -0
  20. weights/cheetah/run/ckpt-10.pt +3 -0
  21. weights/cheetah/run/ckpt-11.pt +3 -0
  22. weights/cheetah/run/ckpt-12.pt +3 -0
  23. weights/cheetah/run/ckpt-13.pt +3 -0
  24. weights/cheetah/run/ckpt-14.pt +3 -0
  25. weights/cheetah/run/ckpt-15.pt +3 -0
  26. weights/cheetah/run/ckpt-16.pt +3 -0
  27. weights/cheetah/run/ckpt-17.pt +3 -0
  28. weights/cheetah/run/ckpt-18.pt +3 -0
  29. weights/cheetah/run/ckpt-19.pt +3 -0
  30. weights/cheetah/run/ckpt-2.pt +3 -0
  31. weights/cheetah/run/ckpt-20.pt +3 -0
  32. weights/cheetah/run/ckpt-21.pt +3 -0
  33. weights/cheetah/run/ckpt-22.pt +3 -0
  34. weights/cheetah/run/ckpt-23.pt +3 -0
  35. weights/cheetah/run/ckpt-24.pt +3 -0
  36. weights/cheetah/run/ckpt-25.pt +3 -0
  37. weights/cheetah/run/ckpt-26.pt +3 -0
  38. weights/cheetah/run/ckpt-27.pt +3 -0
  39. weights/cheetah/run/ckpt-28.pt +3 -0
  40. weights/cheetah/run/ckpt-29.pt +3 -0
  41. weights/cheetah/run/ckpt-3.pt +3 -0
  42. weights/cheetah/run/ckpt-30.pt +3 -0
  43. weights/cheetah/run/ckpt-31.pt +3 -0
  44. weights/cheetah/run/ckpt-32.pt +3 -0
  45. weights/cheetah/run/ckpt-33.pt +3 -0
  46. weights/cheetah/run/ckpt-34.pt +3 -0
  47. weights/cheetah/run/ckpt-35.pt +3 -0
  48. weights/cheetah/run/ckpt-36.pt +3 -0
  49. weights/cheetah/run/ckpt-37.pt +3 -0
  50. weights/cheetah/run/ckpt-38.pt +3 -0
README_dmcontrol_collect.md ADDED
@@ -0,0 +1,87 @@
+ ## dm_control data collection (dmcontrol_collect.py)
+
+ ### Overview
+ This script collects trajectories from DeepMind Control (dm_control) environments using torque actions sampled uniformly from [-1, 1]. Data are saved via `TrajectoryBuffer` as a compressed `.npz` file alongside a metadata `.pkl`.
+
+ The state collected at each step contains, in order:
+ - joint angles (radians)
+ - joint angular velocities (rad/s)
+ - root position (x, y, z)
+ - root linear velocity (vx, vy, vz)
+ - root rotation quaternion (qx, qy, qz, qw)
+ - root angular velocity (wx, wy, wz)
+ - last applied torque (action vector)
+
+ ### Requirements
+ - Python 3.9+
+ - dm_control and MuJoCo installed:
+ ```bash
+ pip install dm-control mujoco
+ ```
+
+ ### Hyperparameters (CLI)
+
+ | Name | Type / Default | Description |
+ |------|----------------|-------------|
+ | `--domain` | str, default `quadruped` | dm_control domain name, e.g. `quadruped`, `cheetah`. |
+ | `--task` | str, default `walk` | dm_control task name, e.g. `walk`, `run`. |
+ | `--seed` | int, default `0` | PRNG seed used for the env and action sampling. |
+ | `--trajectories_per_file` | int, default `512` | Number of trajectories to collect and save in one output file. |
+ | `--steps_per_trajectory` | int, default `48` | Number of steps per trajectory segment saved to the dataset. |
+ | `--out_dir` | str, default `/home/lau/sim/DynaTraj/dataset` | Directory for the output `.npz` and metadata `.pkl`. |
+ | `--render` | flag (bool), default `False` | If set, render frames during collection (tries OpenCV, then matplotlib). |
+
+ Notes:
+ - Actions are sampled i.i.d. uniformly from [-1, 1] at each step and treated as torques.
+ - If the model uses a free base, the root quaternion is output as `(x, y, z, w)`.
+
+ ### Output format
+ - Dataset file: `dmcontrol_{domain}_{task}_seed{seed}_{timestamp}.npz`
+ - Metadata file: `dmcontrol_{domain}_{task}_seed{seed}_{timestamp}_metadata.pkl`
+
+ `npz` keys (all written by `TrajectoryBuffer`):
+ - `obs`: shape `[N, B, T, D_obs]`
+ - `ext_obs`: shape `[N, B, T, D_obs]` (same content as `obs` in this script)
+ - `action`: shape `[N, B, T, D_act]`
+ - `reward`: shape `[N, B, T]`
+ - `done`: shape `[N, B, T]`
+
+ Where:
+ - `N` = number of trajectory segments (equals `trajectories_per_file` for `B=1`)
+ - `B` = batch size (this script uses `B=1`)
+ - `T` = `steps_per_trajectory`
+ - `D_obs` = state dimension described above
+ - `D_act` = action dimension from the environment action spec
+
+ The metadata `.pkl` contains: domain, task, seed, counts, action bounds, timestamp, and the `render` flag.
+
+ ### Examples
+ - Quadruped walk (default):
+ ```bash
+ python /home/lau/sim/DynaTraj/dmcontrol_collect.py
+ ```
+ - Cheetah run (planar cheetah):
+ ```bash
+ python /home/lau/sim/DynaTraj/dmcontrol_collect.py --domain cheetah --task run --seed 1 --trajectories_per_file 512 --steps_per_trajectory 48 --out_dir /home/lau/sim/DynaTraj/dataset
+ ```
+ - With rendering (requires OpenCV or matplotlib):
+ ```bash
+ python /home/lau/sim/DynaTraj/dmcontrol_collect.py --domain quadruped --task walk --render
+ ```
+
+ ### Tips
+ - Rendering slows down collection; disable `--render` when collecting large datasets.
+ - If a task terminates early, the script resets automatically and continues until it reaches the requested number of trajectories.
+ - Ensure MuJoCo is set up properly in your environment if dm_control fails to import.
+
+ ### SB3 checkpoint collection (sb3_collect.py)
+ Example invocation:
+ ```bash
+ python /home/lau/sim/DynaTraj/sb3_collect.py \
+     --domain cheetah --task run \
+     --algo SAC \
+     --ckpt_root /home/lau/sim/DynaTraj/weights \
+     --ckpt_indices 1,10,20,30,40,50 \
+     --trajectories_per_ckpt 5120 \
+     --steps_per_trajectory 24 \
+     --out_dir /home/lau/sim/DynaTraj/dataset \
+     --device cpu \
+     --render
+ ```
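To make the `[N, B, T, D]` layout above concrete, here is a minimal, self-contained sketch that writes a toy file with the same keys and shapes the script produces and reads it back. The dimensions below are made up for illustration; the real `D_obs` and `D_act` depend on the environment.

```python
import os
import tempfile

import numpy as np

# Toy dataset mirroring the script's output keys and shapes.
# N segments, batch B, T steps, D_obs/D_act feature dims (all illustrative).
N, B, T, D_obs, D_act = 2, 1, 48, 23, 6
data = {
    "obs": np.zeros((N, B, T, D_obs), np.float32),
    "ext_obs": np.zeros((N, B, T, D_obs), np.float32),
    "action": np.zeros((N, B, T, D_act), np.float32),
    "reward": np.zeros((N, B, T), np.float32),
    "done": np.zeros((N, B, T), np.bool_),
}
path = os.path.join(tempfile.mkdtemp(), "toy.npz")
np.savez_compressed(path, **data)

# Read it back the way a consumer of the dataset would.
with np.load(path) as f:
    for k in ("obs", "ext_obs", "action", "reward", "done"):
        print(k, f[k].shape)
```

The same `np.load` pattern applies to the real files produced by the script.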
__pycache__/dataset.cpython-310.pyc ADDED
Binary file (1.99 kB).
 
dataset.py ADDED
@@ -0,0 +1,48 @@
+ import numpy as np
+ from collections import defaultdict
+
+
+ class TrajectoryBuffer:
+     """Accumulates per-step batches and groups them into fixed-length trajectory segments."""
+
+     def __init__(self, traj_steps):
+         self.traj_steps = traj_steps
+
+         self.step_idx = 0
+         self.buffers = defaultdict(list)
+         self.traj_pool = defaultdict(list)
+         self.batch_size = None
+
+     def append_step(self, obs, ext_obs, action, reward, done):
+         """
+         obs     : [B, ...]
+         ext_obs : [B, ...]
+         action  : [B, ...]
+         reward  : [B]
+         done    : [B]
+         """
+         if self.batch_size is None:
+             self.batch_size = obs.shape[0]
+         self.buffers["obs"].append(obs.copy())
+         self.buffers["action"].append(action.copy())
+         self.buffers["reward"].append(reward.copy())
+         self.buffers["done"].append(done.copy())
+         self.buffers["ext_obs"].append(ext_obs.copy())
+
+         self.step_idx += 1
+
+         # Every traj_steps appends, flush the step lists into one [B, T, ...] segment
+         if self.step_idx % self.traj_steps == 0:
+             for k, lst in self.buffers.items():
+                 traj_segment = np.stack(lst, axis=1)
+                 self.traj_pool[k].append(traj_segment)
+                 lst.clear()
+
+     def finalize(self):
+         # Stack flushed segments into [N, B, T, ...] arrays
+         return {k: np.stack(v, axis=0) for k, v in self.traj_pool.items()}
+
+     def save(self, path):
+         np.savez_compressed(path, **self.finalize())
+
+     def __len__(self):
+         # Number of completed trajectories (flushed segments x batch size)
+         if not self.traj_pool or self.batch_size is None:
+             return 0
+         flushes = len(next(iter(self.traj_pool.values())))
+         return flushes * self.batch_size
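`TrajectoryBuffer` stacks each flush of `traj_steps` per-step `[B, ...]` arrays along a new time axis, and `finalize` stacks the flushes along a leading segment axis. The shape bookkeeping can be mirrored in a few lines of plain NumPy (toy sizes, not the script's defaults):

```python
import numpy as np

# Toy sizes: batch B=1, segment length T=4, feature dim D=3, 8 appended steps
B, T, D, n_steps = 1, 4, 3, 8
steps = [np.full((B, D), i, dtype=np.float32) for i in range(n_steps)]

# One flush: stack T step arrays along a new axis=1 -> [B, T, D]
segments = [np.stack(steps[i:i + T], axis=1) for i in range(0, n_steps, T)]

# finalize(): stack segments along a new leading axis -> [N, B, T, D]
obs = np.stack(segments, axis=0)
print(obs.shape)  # (2, 1, 4, 3)
```

This is why `len(buffer)` counts only *completed* segments: steps still sitting in the per-step lists have not been flushed into `traj_pool` yet.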
dataset/sb3_cheetah_run_ckpt001_2025-08-08_01-32-13.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e9ff4c767057386f7c39e45ed4ae845aead4295ae335d075899fb1251f01e93
+ size 25049522
dataset/sb3_cheetah_run_ckpt001_2025-08-08_01-32-13_metadata.pkl ADDED
Binary file (190 Bytes).
 
dataset/sb3_cheetah_run_ckpt010_2025-08-08_01-32-52.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30a377139a84ec00c25991d88786f5888f87d67d04d7dd7b0b3a7884bdb817d7
+ size 25287441
dataset/sb3_cheetah_run_ckpt010_2025-08-08_01-32-52_metadata.pkl ADDED
Binary file (190 Bytes).
 
dataset/sb3_cheetah_run_ckpt020_2025-08-08_01-33-31.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:233ceb0c27ed9e88d10b98123170d2da3e19044be8d11bcd3b17df54e3a730a2
+ size 25215285
dataset/sb3_cheetah_run_ckpt020_2025-08-08_01-33-31_metadata.pkl ADDED
Binary file (190 Bytes).
 
dataset/sb3_cheetah_run_ckpt030_2025-08-08_01-34-10.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce9a474990a0f216bf61b172523bea7f918c66fc98721c393e4284a8632185d5
+ size 25393126
dataset/sb3_cheetah_run_ckpt030_2025-08-08_01-34-10_metadata.pkl ADDED
Binary file (190 Bytes).
 
dataset/sb3_cheetah_run_ckpt040_2025-08-08_01-34-50.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f58d03f3fb03b927d28d9839c6cfe1fc16a216456dc6ab5df7f5743e66a9250
+ size 25368383
dataset/sb3_cheetah_run_ckpt040_2025-08-08_01-34-50_metadata.pkl ADDED
Binary file (190 Bytes).
 
dataset/sb3_cheetah_run_ckpt050_2025-08-08_01-35-40.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1dc13467e61a171ce0aacf16b01a7275ef6f7c57794fc99c5092f0db313bba52
+ size 25363130
dataset/sb3_cheetah_run_ckpt050_2025-08-08_01-35-40_metadata.pkl ADDED
Binary file (190 Bytes).
 
dmcontrol_collect.py ADDED
@@ -0,0 +1,294 @@
+ import argparse
+ import os
+ import time
+ from datetime import datetime
+
+ import numpy as np
+ from tqdm import tqdm
+
+ from dataset import TrajectoryBuffer
+
+ # dm_control imports
+ try:
+     from dm_control import suite
+ except Exception as e:
+     raise RuntimeError(
+         "dm_control is required. Install via: pip install dm-control mujoco"
+     ) from e
+
+
+ class _RenderHelper:
+     def __init__(self):
+         self.backend = None
+         self._warned = False
+         self._cv2 = None
+         self._plt = None
+         self._fig = None
+         self._ax = None
+         self._im = None
+         try:
+             import cv2  # type: ignore
+
+             self._cv2 = cv2
+             self.backend = "cv2"
+         except Exception:
+             try:
+                 import matplotlib.pyplot as plt  # type: ignore
+
+                 self._plt = plt
+                 self.backend = "mpl"
+                 self._fig, self._ax = plt.subplots()
+                 self._im = None
+                 plt.ion()
+             except Exception:
+                 self.backend = None
+
+     def show(self, rgb: np.ndarray):
+         if self.backend == "cv2" and self._cv2 is not None:
+             bgr = rgb[..., ::-1]
+             self._cv2.imshow("dmcontrol", bgr)
+             self._cv2.waitKey(1)
+         elif self.backend == "mpl" and self._plt is not None:
+             if self._im is None:
+                 self._im = self._ax.imshow(rgb)
+                 self._ax.axis("off")
+             else:
+                 self._im.set_data(rgb)
+             self._plt.pause(0.001)
+         else:
+             if not self._warned:
+                 print("[WARN] Rendering requested but no display backend found (cv2/matplotlib). Skipping render.")
+                 self._warned = True
+
+     def close(self):
+         if self.backend == "cv2" and self._cv2 is not None:
+             self._cv2.destroyAllWindows()
+         elif self.backend == "mpl" and self._plt is not None and self._fig is not None:
+             self._plt.close(self._fig)
+
+
+ def build_state_from_physics(physics, last_action: np.ndarray) -> np.ndarray:
+     """
+     Build the state vector from MuJoCo physics and the last applied action (torque).
+
+     State contains, in order:
+     - joint angles (radians)
+     - joint angular velocities (rad/s)
+     - root position (x, y, z)
+     - root linear velocity (vx, vy, vz)
+     - root rotation quaternion (qx, qy, qz, qw)
+     - root angular velocity (wx, wy, wz)
+     - last torque applied (per actuator)
+     """
+     # Copy to avoid referencing MuJoCo buffers
+     qpos = np.array(physics.data.qpos, dtype=np.float32).copy()
+     qvel = np.array(physics.data.qvel, dtype=np.float32).copy()
+
+     # Assume a floating base with a free joint first (most 3D locomotion models)
+     # qpos: [x, y, z, qw, qx, qy, qz, joint_angles...]
+     # qvel: [vx, vy, vz, wx, wy, wz, joint_velocities...]
+     if qpos.shape[0] >= 7 and qvel.shape[0] >= 6:
+         root_pos = qpos[0:3]
+         # Reorder quaternion from (w, x, y, z) to (x, y, z, w)
+         qwxyz = qpos[3:7]
+         root_quat = np.array([qwxyz[1], qwxyz[2], qwxyz[3], qwxyz[0]], dtype=np.float32)
+         root_lin_vel = qvel[0:3]
+         root_ang_vel = qvel[3:6]
+         joint_angles = qpos[7:]
+         joint_vels = qvel[6:]
+     else:
+         # Fallback for planar / non-free-base models: no 3D root state
+         root_pos = np.zeros(3, dtype=np.float32)
+         root_quat = np.array([0.0, 0.0, 0.0, 1.0], dtype=np.float32)
+         root_lin_vel = np.zeros(3, dtype=np.float32)
+         root_ang_vel = np.zeros(3, dtype=np.float32)
+         joint_angles = qpos.astype(np.float32)
+         joint_vels = qvel.astype(np.float32)
+
+     state_parts = [
+         joint_angles.astype(np.float32),
+         joint_vels.astype(np.float32),
+         root_pos.astype(np.float32),
+         root_lin_vel.astype(np.float32),
+         root_quat.astype(np.float32),
+         root_ang_vel.astype(np.float32),
+         last_action.astype(np.float32),
+     ]
+     return np.concatenate(state_parts, dtype=np.float32)
+
+
+ essential_hparams = dict(
+     trajectories_per_file=512,
+     steps_per_trajectory=48,
+ )
+
+
+ def collect_dmcontrol(
+     domain: str,
+     task: str,
+     seed: int,
+     trajectories_per_file: int,
+     steps_per_trajectory: int,
+     out_dir: str,
+     render: bool = False,
+ ):
+     rng = np.random.RandomState(seed)
+
+     # Load environment
+     env = suite.load(
+         domain_name=domain,
+         task_name=task,
+         task_kwargs={"random": seed},
+         environment_kwargs={"flat_observation": False},
+     )
+
+     action_spec = env.action_spec()
+     if action_spec.minimum is None or action_spec.maximum is None:
+         # Default to [-1, 1] if not specified (bounds are normally present in DMC)
+         action_low = -np.ones(action_spec.shape, dtype=np.float32)
+         action_high = np.ones(action_spec.shape, dtype=np.float32)
+     else:
+         action_low = np.asarray(action_spec.minimum, dtype=np.float32)
+         action_high = np.asarray(action_spec.maximum, dtype=np.float32)
+
+     # Prepare output directory
+     os.makedirs(out_dir, exist_ok=True)
+
+     # Create buffer
+     buffer = TrajectoryBuffer(steps_per_trajectory)
+
+     # Optional renderer
+     viewer = _RenderHelper() if render else None
+
+     # Reset env
+     ts = env.reset()
+     prev_action = np.zeros(action_spec.shape, dtype=np.float32)
+
+     # Progress
+     pbar = tqdm(total=trajectories_per_file, desc=f"Collecting {domain}/{task}")
+
+     # Main loop until we fill the required number of trajectories
+     while len(buffer) < trajectories_per_file:
+         # Build current state from physics and last applied torque
+         state = build_state_from_physics(env.physics, prev_action)
+
+         # Reward / done from current timestep
+         reward = 0.0 if ts.reward is None else float(ts.reward)
+         done = bool(ts.last())
+
+         # Prepare batch dimension B=1
+         obs_np = state[None, :]
+         ext_obs_np = obs_np  # store the same array as ext_obs for convenience
+         action_np = prev_action[None, :]
+         reward_np = np.array([reward], dtype=np.float32)
+         done_np = np.array([done], dtype=np.bool_)
+
+         # Append to buffer
+         buffer.append_step(obs_np, ext_obs_np, action_np, reward_np, done_np)
+
+         # Sample next action uniformly in [-1, 1]
+         action = rng.uniform(low=-1, high=1, size=action_spec.shape).astype(np.float32)
+
+         # Step the environment
+         ts = env.step(action)
+
+         # Render current frame if requested
+         if viewer is not None:
+             try:
+                 frame = env.physics.render(height=480, width=640, camera_id=0)
+                 viewer.show(frame)
+             except Exception:
+                 # Suppress rendering errors so they cannot break collection
+                 pass
+
+         # Update last action (torque) for the next state build
+         prev_action = action
+
+         # Handle episode termination
+         if ts.last():
+             ts = env.reset()
+             prev_action = np.zeros_like(prev_action)
+
+         # Update progress
+         pbar.n = len(buffer)
+         pbar.refresh()
+
+     pbar.close()
+
+     if viewer is not None:
+         viewer.close()
+
+     # Save dataset
+     timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
+     file_stem = f"dmcontrol_{domain}_{task}_seed{seed}_{timestamp}"
+     dataset_path = os.path.join(out_dir, f"{file_stem}.npz")
+     buffer.save(dataset_path)
+
+     # Save metadata
+     metadata = {
+         "domain": domain,
+         "task": task,
+         "seed": seed,
+         "num_trajectories": len(buffer),
+         "steps_per_trajectory": steps_per_trajectory,
+         "total_steps": int(len(buffer) * steps_per_trajectory),
+         "action_low": action_low.tolist(),
+         "action_high": action_high.tolist(),
+         "collected_at": timestamp,
+         "render": bool(render),
+     }
+     import pickle
+
+     metadata_path = os.path.join(out_dir, f"{file_stem}_metadata.pkl")
+     with open(metadata_path, "wb") as f:
+         pickle.dump(metadata, f)
+
+     print(f"[INFO] Saved {len(buffer)} trajectories to {dataset_path}")
+     print(f"[INFO] Saved metadata to {metadata_path}")
+
+
+ def parse_args():
+     parser = argparse.ArgumentParser(description="Collect dm_control data with random torque actions")
+     parser.add_argument("--domain", type=str, default="quadruped", help="dm_control domain name (e.g., quadruped, cheetah)")
+     parser.add_argument("--task", type=str, default="walk", help="dm_control task name (e.g., walk, run)")
+     parser.add_argument("--seed", type=int, default=0, help="Random seed")
+     parser.add_argument("--trajectories_per_file", type=int, default=essential_hparams["trajectories_per_file"], help="Number of trajectories to collect per output file")
+     parser.add_argument("--steps_per_trajectory", type=int, default=essential_hparams["steps_per_trajectory"], help="Number of steps per trajectory")
+     parser.add_argument(
+         "--out_dir",
+         type=str,
+         default=os.path.join("/home/lau/sim/DynaTraj", "dataset"),
+         help="Output directory to store datasets",
+     )
+     parser.add_argument(
+         "--render",
+         action="store_true",
+         help="If set, render frames during collection (requires cv2 or matplotlib)",
+     )
+     return parser.parse_args()
+
+
+ if __name__ == "__main__":
+     args = parse_args()
+
+     # Basic hyperparameter echo
+     print("[INFO] Hyperparameters:")
+     print(f"  domain/task: {args.domain}/{args.task}")
+     print(f"  seed: {args.seed}")
+     print(f"  trajectories_per_file: {args.trajectories_per_file}")
+     print(f"  steps_per_trajectory: {args.steps_per_trajectory}")
+     print(f"  out_dir: {args.out_dir}")
+     print(f"  render: {args.render}")
+
+     start = time.time()
+     collect_dmcontrol(
+         domain=args.domain,
+         task=args.task,
+         seed=args.seed,
+         trajectories_per_file=args.trajectories_per_file,
+         steps_per_trajectory=args.steps_per_trajectory,
+         out_dir=args.out_dir,
+         render=args.render,
+     )
+     elapsed = time.time() - start
+     print(f"[INFO] Done in {elapsed:.1f}s")
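`build_state_from_physics` assumes MuJoCo's free-joint `qpos` convention and moves the quaternion's scalar part from first to last. A quick sanity check of that slice-and-reorder logic on a hand-built `qpos` vector (values are arbitrary, chosen only to make the reordering visible):

```python
import numpy as np

# Hand-built free-joint qpos: [x, y, z, qw, qx, qy, qz, joint...]
qpos = np.array([0.0, 0.0, 0.5, 1.0, 0.1, 0.2, 0.3, 0.7], dtype=np.float32)

qwxyz = qpos[3:7]  # (w, x, y, z) as stored by MuJoCo
root_quat = np.array([qwxyz[1], qwxyz[2], qwxyz[3], qwxyz[0]], dtype=np.float32)
print(root_quat)   # scalar part w moved to the last slot: [0.1, 0.2, 0.3, 1.0]
```

Consumers of the dataset should expect `(x, y, z, w)` order, as the README notes.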
sb3_collect.py ADDED
@@ -0,0 +1,312 @@
1
+ import argparse
2
+ import glob
3
+ import os
4
+ from datetime import datetime
5
+ from typing import Dict, List, Tuple
6
+
7
+ import numpy as np
8
+ from tqdm import tqdm
9
+
10
+ import torch
11
+
12
+ from dataset import TrajectoryBuffer
13
+
14
+ # dm_control
15
+ try:
16
+ from dm_control import suite
17
+ except Exception as e:
18
+ raise RuntimeError(
19
+ "dm_control is required. Install via: pip install dm-control mujoco"
20
+ ) from e
21
+
22
+ # Stable Baselines3
23
+ try:
24
+ from stable_baselines3 import SAC, PPO, TD3
25
+ from stable_baselines3.common.vec_env import DummyVecEnv
26
+ except Exception as e:
27
+ raise RuntimeError(
28
+ "stable-baselines3 is required. Install via: pip install stable-baselines3"
29
+ ) from e
30
+
31
+ ALGOS = {
32
+ "SAC": SAC,
33
+ "PPO": PPO,
34
+ "TD3": TD3,
35
+ }
36
+
37
+
38
+ class _RenderHelper:
39
+ def __init__(self):
40
+ self.backend = None
41
+ self._warned = False
42
+ self._cv2 = None
43
+ self._plt = None
44
+ self._fig = None
45
+ self._ax = None
46
+ self._im = None
47
+ try:
48
+ import cv2 # type: ignore
49
+
50
+ self._cv2 = cv2
51
+ self.backend = "cv2"
52
+ except Exception:
53
+ try:
54
+ import matplotlib.pyplot as plt # type: ignore
55
+
56
+ self._plt = plt
57
+ self.backend = "mpl"
58
+ self._fig, self._ax = plt.subplots()
59
+ self._im = None
60
+ plt.ion()
61
+ except Exception:
62
+ self.backend = None
63
+
64
+ def show(self, rgb: np.ndarray):
65
+ if self.backend == "cv2" and self._cv2 is not None:
66
+ bgr = rgb[..., ::-1]
67
+ self._cv2.imshow("sb3_collect", bgr)
68
+ self._cv2.waitKey(1)
69
+ elif self.backend == "mpl" and self._plt is not None:
70
+ if self._im is None:
71
+ self._im = self._ax.imshow(rgb)
72
+ self._ax.axis("off")
73
+ else:
74
+ self._im.set_data(rgb)
75
+ self._plt.pause(0.001)
76
+ else:
77
+ if not self._warned:
78
+ print("[WARN] Rendering requested but no display backend found (cv2/matplotlib). Skipping render.")
79
+ self._warned = True
80
+
81
+ def close(self):
82
+ if self.backend == "cv2" and self._cv2 is not None:
83
+ self._cv2.destroyAllWindows()
84
+ elif self.backend == "mpl" and self._plt is not None and self._fig is not None:
85
+ self._plt.close(self._fig)
86
+
87
+
88
+ # --------- Helpers ---------
89
+
90
+ def flatten_env_observation(obs_dict: Dict[str, np.ndarray]) -> Tuple[np.ndarray, List[str]]:
91
+ keys = sorted(obs_dict.keys())
92
+ parts = [np.asarray(obs_dict[k], dtype=np.float32).ravel() for k in keys]
93
+ return (np.concatenate(parts, axis=0).astype(np.float32), keys)
94
+
95
+
96
+ def flatten_obs_with_keys(obs_dict: Dict[str, np.ndarray], keys: List[str]) -> np.ndarray:
97
+ parts = [np.asarray(obs_dict[k], dtype=np.float32).ravel() for k in keys]
98
+ return np.concatenate(parts, axis=0).astype(np.float32)
99
+
100
+
101
+ def build_state_from_physics(physics: "suite.Environment.physics", last_action: np.ndarray) -> np.ndarray:
102
+ qpos = np.array(physics.data.qpos, dtype=np.float32).copy()
103
+ qvel = np.array(physics.data.qvel, dtype=np.float32).copy()
104
+ if qpos.shape[0] >= 7 and qvel.shape[0] >= 6:
105
+ root_pos = qpos[0:3]
106
+ qwxyz = qpos[3:7]
107
+ root_quat = np.array([qwxyz[1], qwxyz[2], qwxyz[3], qwxyz[0]], dtype=np.float32)
108
+ root_lin_vel = qvel[0:3]
109
+ root_ang_vel = qvel[3:6]
110
+ joint_angles = qpos[7:]
111
+ joint_vels = qvel[6:]
112
+ else:
113
+ root_pos = np.zeros(3, dtype=np.float32)
114
+ root_quat = np.array([0.0, 0.0, 0.0, 1.0], dtype=np.float32)
115
+ root_lin_vel = np.zeros(3, dtype=np.float32)
116
+ root_ang_vel = np.zeros(3, dtype=np.float32)
117
+ joint_angles = qpos.astype(np.float32)
118
+ joint_vels = qvel.astype(np.float32)
119
+ state_parts = [
120
+ joint_angles.astype(np.float32),
121
+ joint_vels.astype(np.float32),
122
+ root_pos.astype(np.float32),
123
+ root_lin_vel.astype(np.float32),
124
+ root_quat.astype(np.float32),
125
+ root_ang_vel.astype(np.float32),
126
+ last_action.astype(np.float32),
127
+ ]
128
+ return np.concatenate(state_parts, dtype=np.float32)
129
+
130
+
131
+ def load_sb3_policy_for_inference(algo_name: str, domain: str, task: str, device: str = "cpu"):
132
+ # Create a tiny dummy env to instantiate policy with correct spaces
133
+ def _make_env():
134
+ env = suite.load(domain_name=domain, task_name=task, task_kwargs={"random": 0})
135
+ # Build observation size from first reset
136
+ obs0, obs_keys = flatten_env_observation(env.reset().observation)
137
+ action_spec = env.action_spec()
138
+ act_low = np.asarray(action_spec.minimum, dtype=np.float32)
139
+ act_high = np.asarray(action_spec.maximum, dtype=np.float32)
140
+ # Create a dummy Gym space via sb3 internals by wrapping DummyVecEnv
141
+ # We will instantiate the model with a lambda that returns an object with the same spaces
142
+ import gymnasium as gym
143
+ from gymnasium import spaces
144
+
145
+ class DummySpaceEnv(gym.Env):
146
+ def __init__(self):
147
+ self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(obs0.shape[0],), dtype=np.float32)
148
+ self.action_space = spaces.Box(low=act_low, high=act_high, shape=action_spec.shape, dtype=np.float32)
149
+ def reset(self, *, seed=None, options=None):
150
+ return np.zeros_like(obs0), {}
151
+ def step(self, action):
152
+ return np.zeros_like(obs0), 0.0, True, False, {}
153
+
154
+ vec_env = DummyVecEnv([lambda: DummySpaceEnv()])
155
+ return vec_env
156
+
157
+ ALGO = ALGOS[algo_name]
158
+ vec_env = _make_env()
159
+ model = ALGO("MlpPolicy", vec_env, verbose=0, device=device)
160
+ model.policy.to(device)
161
+ model.policy.eval()
162
+ return model
163
+
164
+
165
+ def collect_with_checkpoint(env, action_spec, model, target_trajectories: int, steps_per_traj: int, buffer: TrajectoryBuffer, pbar: tqdm, obs_keys: List[str], viewer: _RenderHelper | None):
166
+ # Reset env and local counters
167
+ ts = env.reset()
168
+ prev_action = np.zeros(action_spec.shape, dtype=np.float32)
169
+ start_len = len(buffer)
170
+
171
+ while (len(buffer) - start_len) < target_trajectories:
172
+ # Build state for dataset
173
+ state = build_state_from_physics(env.physics, prev_action)
174
+ reward = 0.0 if ts.reward is None else float(ts.reward)
175
+ done = bool(ts.last())
176
+
177
+ # Append current step (B=1)
178
+ obs_np = state[None, :]
179
+ ext_obs_np = obs_np
180
+ action_np = prev_action[None, :]
181
+ reward_np = np.array([reward], dtype=np.float32)
182
+ done_np = np.array([done], dtype=np.bool_)
183
+ buffer.append_step(obs_np, ext_obs_np, action_np, reward_np, done_np)
184
+
185
+ # Policy action from flattened env observation
186
+ flat_obs = flatten_obs_with_keys(ts.observation, obs_keys)
187
+ action, _ = model.predict(flat_obs, deterministic=True)
188
+ action = np.asarray(action, dtype=np.float32).reshape(action_spec.shape)
189
+ # Clip to env action bounds
190
+ low = np.asarray(action_spec.minimum, dtype=np.float32)
191
+ high = np.asarray(action_spec.maximum, dtype=np.float32)
192
+ action = np.clip(action, low, high)
193
+
194
+ # Step env
195
+ ts = env.step(action)
196
+ prev_action = action
197
+
198
+ # Render
199
+ if viewer is not None:
200
+ try:
201
+ frame = env.physics.render(height=480, width=640, camera_id=0)
202
+ viewer.show(frame)
203
+ except Exception:
204
+ pass
205
+
206
+ if ts.last():
207
+ ts = env.reset()
208
+ prev_action = np.zeros_like(prev_action)
209
+
210
+ # Update progress bar to reflect number of completed trajectories in buffer
211
+ pbar.n = len(buffer)
212
+ pbar.refresh()
213
+
214
+
215
+ # --------- Main pipeline ---------
216
+
217
+ def parse_args():
218
+ parser = argparse.ArgumentParser(description="Collect dm_control dataset using specified SB3 checkpoints, one npz per ckpt")
219
+ parser.add_argument("--domain", type=str, default="cheetah")
220
+ parser.add_argument("--task", type=str, default="run")
221
+ parser.add_argument("--algo", type=str, choices=["SAC", "PPO", "TD3"], default="SAC")
222
+ parser.add_argument("--seed", type=int, default=0)
223
+ parser.add_argument("--ckpt_root", type=str, default=os.path.join("/home/lau/sim/DynaTraj", "weights"))
224
+ parser.add_argument("--ckpt_indices", type=str, required=True, help="Comma-separated list of checkpoint indices, e.g., 0,10,30,40,50")
225
+ parser.add_argument("--trajectories_per_ckpt", type=int, default=5120)
+     parser.add_argument("--steps_per_trajectory", type=int, default=24)
+     parser.add_argument("--out_dir", type=str, default=os.path.join("/home/lau/sim/DynaTraj", "dataset"))
+     parser.add_argument("--device", type=str, default="cpu")
+     parser.add_argument("--render", action="store_true")
+     return parser.parse_args()
+
+
+ def main():
+     args = parse_args()
+
+     # Prepare env
+     env = suite.load(domain_name=args.domain, task_name=args.task, task_kwargs={"random": args.seed})
+     action_spec = env.action_spec()
+
+     # Determine obs flatten order once
+     ts0 = env.reset()
+     _, obs_keys = flatten_env_observation(ts0.observation)
+
+     # Parse checkpoint indices
+     try:
+         indices = [int(x.strip()) for x in args.ckpt_indices.split(",") if x.strip() != ""]
+     except Exception:
+         raise ValueError("Invalid --ckpt_indices. Provide comma-separated integers, e.g., 0,10,30")
+
+     ckpt_dir = os.path.join(args.ckpt_root, args.domain, args.task)
+     if not os.path.isdir(ckpt_dir):
+         raise FileNotFoundError(f"Checkpoint directory not found: {ckpt_dir}")
+
+     os.makedirs(args.out_dir, exist_ok=True)
+
+     viewer = _RenderHelper() if args.render else None
+
+     for idx in indices:
+         ckpt_path = os.path.join(ckpt_dir, f"ckpt-{idx}.pt")
+         if not os.path.isfile(ckpt_path):
+             raise FileNotFoundError(f"Checkpoint not found: {ckpt_path}")
+
+         payload = torch.load(ckpt_path, map_location=args.device)
+         # Checkpoints store the SB3 class name (e.g. "SAC"); normalize to
+         # lowercase so it matches the ALGOS keys ("sac", "ppo", "td3").
+         algo_name = str(payload.get("algo", args.algo)).lower()
+         if algo_name not in ALGOS:
+             raise ValueError(f"Unsupported algo in checkpoint {ckpt_path}: {algo_name}")
+         state_dict = payload.get("policy_state_dict", None)
+         if state_dict is None:
+             raise RuntimeError(f"policy_state_dict not found in {ckpt_path}")
+
+         # Build model and load policy weights
+         model = load_sb3_policy_for_inference(algo_name, args.domain, args.task, device=args.device)
+         model.policy.load_state_dict(state_dict)
+         model.policy.eval()
+
+         # Collect for this ckpt
+         buffer = TrajectoryBuffer(args.steps_per_trajectory)
+         pbar = tqdm(total=args.trajectories_per_ckpt, desc=f"Collecting ckpt-{idx}")
+         collect_with_checkpoint(env, action_spec, model, args.trajectories_per_ckpt, args.steps_per_trajectory, buffer, pbar, obs_keys, viewer)
+         pbar.close()
+
+         # Save dataset for this ckpt
+         timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
+         file_stem = f"sb3_{args.domain}_{args.task}_ckpt{idx:03d}_{timestamp}"
+         dataset_path = os.path.join(args.out_dir, f"{file_stem}.npz")
+         buffer.save(dataset_path)
+
+         # Save minimal metadata
+         meta = {
+             "domain": args.domain,
+             "task": args.task,
+             "algo": args.algo,
+             "seed": args.seed,
+             "ckpt_index": idx,
+             "trajectories_per_ckpt": args.trajectories_per_ckpt,
+             "steps_per_trajectory": args.steps_per_trajectory,
+             "total_trajectories": len(buffer),
+             "total_steps": len(buffer) * args.steps_per_trajectory,
+             "render": bool(args.render),
+         }
+         import pickle
+         with open(os.path.join(args.out_dir, f"{file_stem}_metadata.pkl"), "wb") as f:
+             pickle.dump(meta, f)
+
+         print(f"[INFO] Saved ckpt {idx}: {dataset_path} ({len(buffer)} trajectories)")
+
+     if viewer is not None:
+         viewer.close()
+
+
+ if __name__ == "__main__":
+     main()
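Each collection run writes a `.npz` archive plus a pickled metadata dict side by side, as in the `dataset/` files added by this commit. A minimal sketch of inspecting such a pair, assuming only those two formats (the array names and shapes below are illustrative, not the real `TrajectoryBuffer.save` schema):

```python
import os
import pickle
import tempfile

import numpy as np

# Stand-in files mimicking the script's outputs; the real array keys depend
# on TrajectoryBuffer.save, so "observations" here is a hypothetical name.
out_dir = tempfile.mkdtemp()
stem = "sb3_cheetah_run_ckpt001_example"
np.savez(os.path.join(out_dir, f"{stem}.npz"),
         observations=np.zeros((5, 24, 17), dtype=np.float32))
with open(os.path.join(out_dir, f"{stem}_metadata.pkl"), "wb") as f:
    pickle.dump({"domain": "cheetah", "task": "run", "ckpt_index": 1}, f)

# Inspect: list the arrays stored in the archive, then read the metadata dict.
data = np.load(os.path.join(out_dir, f"{stem}.npz"))
print(sorted(data.files))
with open(os.path.join(out_dir, f"{stem}_metadata.pkl"), "rb") as f:
    meta = pickle.load(f)
print(meta["domain"], meta["ckpt_index"])
```

Listing `data.files` first makes the approach robust to schema changes in the buffer's save format.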
train_sb3_dmcontrol.py ADDED
@@ -0,0 +1,203 @@
+ import argparse
+ import os
+ from datetime import datetime
+ from typing import Dict, List
+
+ import numpy as np
+
+ # dm_control
+ try:
+     from dm_control import suite
+ except Exception as e:
+     raise RuntimeError(
+         "dm_control is required. Install via: pip install dm-control mujoco"
+     ) from e
+
+ # gym/gymnasium compatibility
+ try:
+     import gymnasium as gym
+ except Exception:
+     import gym  # type: ignore
+
+ # Stable Baselines3
+ try:
+     from stable_baselines3 import SAC, PPO, TD3
+     from stable_baselines3.common.vec_env import DummyVecEnv, SubprocVecEnv
+     from stable_baselines3.common.env_util import make_vec_env
+     from stable_baselines3.common.callbacks import BaseCallback
+ except Exception as e:
+     raise RuntimeError(
+         "stable-baselines3 is required. Install via: pip install stable-baselines3"
+     ) from e
+
+ import torch
+
+
+ class DmControlGymWrapper(gym.Env):
+     """A minimal Gym/Gymnasium wrapper for dm_control suite tasks with flattened obs."""
+
+     metadata = {"render_modes": ["rgb_array"], "render_fps": 60}
+
+     def __init__(self, domain: str, task: str, seed: int | None = None):
+         super().__init__()
+         self._domain = domain
+         self._task = task
+         self._seed = seed if seed is not None else 0
+         self._env = suite.load(domain_name=domain, task_name=task, task_kwargs={"random": self._seed})
+
+         # Build observation space by flattening dict in sorted key order
+         example_obs = self._env.reset().observation
+         self._obs_keys = sorted(example_obs.keys())
+         obs_size = int(np.sum([np.asarray(example_obs[k]).size for k in self._obs_keys]))
+         # Use unbounded space; algorithms usually normalize internally
+         self.observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(obs_size,), dtype=np.float32)
+
+         # Action space from spec
+         action_spec = self._env.action_spec()
+         self._act_low = np.asarray(action_spec.minimum, dtype=np.float32)
+         self._act_high = np.asarray(action_spec.maximum, dtype=np.float32)
+         self.action_space = gym.spaces.Box(low=self._act_low, high=self._act_high, shape=action_spec.shape, dtype=np.float32)
+
+     def seed(self, seed: int | None = None):
+         if seed is not None:
+             self._seed = seed
+         # dm_control uses task_kwargs random; re-create env to apply new seed
+         self._env = suite.load(domain_name=self._domain, task_name=self._task, task_kwargs={"random": self._seed})
+
+     def _flatten_obs(self, obs_dict: Dict[str, np.ndarray]) -> np.ndarray:
+         parts: List[np.ndarray] = []
+         for k in self._obs_keys:
+             v = np.asarray(obs_dict[k], dtype=np.float32).ravel()
+             parts.append(v)
+         return np.concatenate(parts, axis=0).astype(np.float32)
+
+     def reset(self, *, seed: int | None = None, options: dict | None = None):
+         if seed is not None:
+             self.seed(seed)
+         ts = self._env.reset()
+         obs = self._flatten_obs(ts.observation)
+         info = {}
+         return obs, info
+
+     def step(self, action: np.ndarray):
+         action = np.asarray(action, dtype=np.float32)
+         action = np.clip(action, self._act_low, self._act_high)
+         ts = self._env.step(action)
+         obs = self._flatten_obs(ts.observation)
+         reward = 0.0 if ts.reward is None else float(ts.reward)
+         # dm_control signals episode end via ts.last(); discount == 0 marks a
+         # true terminal state, while discount > 0 at last() is a time-limit
+         # truncation. Distinguishing them keeps SB3's value bootstrapping correct.
+         done = bool(ts.last())
+         terminated = done and float(ts.discount) == 0.0
+         truncated = done and not terminated
+         info = {}
+         return obs, reward, terminated, truncated, info
+
+     def render(self):
+         # Return an RGB array
+         return self._env.physics.render(height=480, width=640, camera_id=0)
+
+
+ ALGOS = {
+     "sac": SAC,
+     "ppo": PPO,
+     "td3": TD3,
+ }
+
+
+ class PeriodicCkptCallback(BaseCallback):
+     """Save policy checkpoint every fixed number of timesteps.
+
+     Saves to weights/<domain>/<task>/ckpt-<k>.pt where k starts from 1.
+     """
+
+     def __init__(self, save_root: str, domain: str, task: str, interval: int = 10_000, verbose: int = 1):
+         super().__init__(verbose)
+         self.save_root = save_root
+         self.domain = domain
+         self.task = task
+         self.interval = interval
+         self.saved_count = 0
+         self.target_dir = os.path.join(save_root, domain, task)
+         os.makedirs(self.target_dir, exist_ok=True)
+
+     def _on_step(self) -> bool:
+         # num_timesteps is global across envs; with n_envs > 1 it advances in
+         # steps of n_envs, so this triggers on multiples only when interval is
+         # divisible by n_envs
+         if self.num_timesteps > 0 and self.num_timesteps % self.interval == 0:
+             self.saved_count += 1
+             path = os.path.join(self.target_dir, f"ckpt-{self.saved_count}.pt")
+             payload = {
+                 "algo": self.model.__class__.__name__,
+                 "domain": self.domain,
+                 "task": self.task,
+                 "num_timesteps": int(self.num_timesteps),
+                 "policy_state_dict": self.model.policy.state_dict(),
+             }
+             torch.save(payload, path)
+             if self.verbose:
+                 print(f"[CKPT] Saved checkpoint #{self.saved_count} at {self.num_timesteps} steps -> {path}")
+         return True
+
+
+ def train(domain: str, task: str, algo: str, total_timesteps: int, n_envs: int, seed: int, device: str, out_dir: str):
+     # Build vectorized envs
+     def make_env_fn(rank: int):
+         def _thunk():
+             env = DmControlGymWrapper(domain=domain, task=task, seed=seed + rank)
+             return env
+         return _thunk
+
+     vec_env = make_vec_env(make_env_fn(0), n_envs=n_envs, seed=seed, vec_env_cls=SubprocVecEnv if n_envs > 1 else DummyVecEnv)
+
+     ALGO_CLS = ALGOS[algo]
+     policy = "MlpPolicy"
+     model = ALGO_CLS(policy, vec_env, verbose=1, seed=seed, device=device)
+
+     # Periodic checkpoint every 10,000 steps
+     ckpt_cb = PeriodicCkptCallback(save_root=out_dir, domain=domain, task=task, interval=10_000, verbose=1)
+
+     print(f"[INFO] Start training {algo.upper()} on {domain}/{task} for {total_timesteps} steps with {n_envs} envs")
+     model.learn(total_timesteps=total_timesteps, progress_bar=True, callback=ckpt_cb)
+
+     os.makedirs(out_dir, exist_ok=True)
+     timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
+     save_stem = f"sb3_{algo}_{domain}-{task}_seed{seed}_{timestamp}"
+     save_path = os.path.join(out_dir, save_stem)
+
+     model.save(save_path)
+     print(f"[INFO] Saved model to: {save_path}.zip")
+
+     vec_env.close()
+
+
+ def parse_args():
+     parser = argparse.ArgumentParser(description="Train dm_control task with Stable Baselines3 and save weights")
+     parser.add_argument("--domain", type=str, default="cheetah", help="dm_control domain (e.g., cheetah, quadruped)")
+     parser.add_argument("--task", type=str, default="run", help="dm_control task (e.g., run, walk)")
+     parser.add_argument("--algo", type=str, choices=list(ALGOS.keys()), default="sac", help="RL algorithm")
+     parser.add_argument("--total_timesteps", type=int, default=500_000, help="Total training steps")
+     parser.add_argument("--n_envs", type=int, default=1, help="Number of parallel envs")
+     parser.add_argument("--seed", type=int, default=0, help="Random seed")
+     parser.add_argument("--device", type=str, default="auto", help="Device: cpu, cuda, or auto")
+     parser.add_argument(
+         "--out_dir",
+         type=str,
+         default=os.path.join("/home/lau/sim/DynaTraj", "weights"),
+         help="Directory to save trained weights",
+     )
+     return parser.parse_args()
+
+
+ if __name__ == "__main__":
+     args = parse_args()
+
+     train(
+         domain=args.domain,
+         task=args.task,
+         algo=args.algo,
+         total_timesteps=args.total_timesteps,
+         n_envs=args.n_envs,
+         seed=args.seed,
+         device=args.device,
+         out_dir=args.out_dir,
+     )
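The wrapper's `_flatten_obs` convention (concatenate the dict values in sorted key order) is what makes the collected flat vectors reproducible across scripts. A standalone sketch of the same idea, with a made-up observation dict standing in for a real dm_control one:

```python
import numpy as np

def flatten_obs(obs_dict):
    """Concatenate dict values in sorted key order, mirroring the wrapper's
    convention so a flat vector can later be mapped back to named fields."""
    keys = sorted(obs_dict.keys())
    parts = [np.asarray(obs_dict[k], dtype=np.float32).ravel() for k in keys]
    return np.concatenate(parts), keys

# Example dict shaped like a dm_control observation (values are made up).
obs = {"velocity": np.arange(3.0), "position": np.ones((2, 2))}
flat, keys = flatten_obs(obs)
print(keys)        # ['position', 'velocity']
print(flat.shape)  # (7,)
```

Because the key order is deterministic, any consumer that records `keys` alongside the data can split `flat` back into its original fields.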
weights/cheetah/run/ckpt-1.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0744eb2316e53296a7bb3811589465f1914b795240e1986654e5b82bd2d6c82
+ size 1459154
weights/cheetah/run/ckpt-10.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab0240eeb12715f4ba0f8b3bec6ba8f8725ca4fa75d57cb9b29b11c45203264f
+ size 1460598
weights/cheetah/run/ckpt-11.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f53d640d8594161a2dd0da4a3b6d4ee966dab8caf5e90370ecf5f14ba996622
+ size 1460598
weights/cheetah/run/ckpt-12.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:125da2ecf17e53970cbcf75077e85784201e318daff39ad6e9f494ca0bd35718
+ size 1460598
weights/cheetah/run/ckpt-13.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c6334cbc92cc2b4efed8b4d5691159a441dc1caf90fe1e9f5c3a4be39f913b7
+ size 1460598
weights/cheetah/run/ckpt-14.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a111154a97f3e9430cc4c8755bc5f8737a1c69762127c205ee63306d4fccd8d
+ size 1460598
weights/cheetah/run/ckpt-15.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b325021e2192d6c9f1d049e5a394133fd6eaa790bee8245b31620e80f6e0970a
+ size 1460598
weights/cheetah/run/ckpt-16.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e1b2cdb115dc8de995da07a795f0b53097f5cf2202cf324e4b9241eebd89386
+ size 1460598
weights/cheetah/run/ckpt-17.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c7d38bfafd40fb232168155bb45f5104b5784cfdc130ae0a6a1333d1d442d25
+ size 1460598
weights/cheetah/run/ckpt-18.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:416cee415fa810944cc8481187c37ecf6804b9182ac8b1d31033062f939662e3
+ size 1460598
weights/cheetah/run/ckpt-19.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eafb01b3ffd9b7b9c95d55957ef4d84adca155acac7f420515ec077b1677ab0b
+ size 1460598
weights/cheetah/run/ckpt-2.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65b441d70cfd553323456c5ba844afc0c8e4a7e15e637eedf6a624f31800e3a2
+ size 1459154
weights/cheetah/run/ckpt-20.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e7be04d7c13c84add27a93c0c6731cf9fa58cac5a8d6b3e2f441a29b2a7d4c2
+ size 1460598
weights/cheetah/run/ckpt-21.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:193aeb405461d5cd6e0d1073644b03995f53bcd1e7ab8221356deb0e1408cffe
+ size 1460598
weights/cheetah/run/ckpt-22.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1125e4a3768c66cb38d5049113efd0ca50d984649ada8da1530d33846602c98d
+ size 1460598
weights/cheetah/run/ckpt-23.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d06c5d9d14a729873b79a4e355ec3873f2c29f251f04de26b2c6badbc726e271
+ size 1460598
weights/cheetah/run/ckpt-24.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0639478f8c7d1ac6a938540a32692e396a2711400908900d40d54b1c1c72731e
+ size 1460598
weights/cheetah/run/ckpt-25.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5cdc5913accef1dd2646c0490ad4fbd61ae84bbb52a316c004a38499a6c33d8a
+ size 1460598
weights/cheetah/run/ckpt-26.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0305e88d849bf903c3f1accc3305bc94b491c4b9768a1393c1e991978ce4eeec
+ size 1460598
weights/cheetah/run/ckpt-27.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7772f7939aafa0824d57462f7706d5694d0adb9c5c71b3334e364f8e821c8054
+ size 1460598
weights/cheetah/run/ckpt-28.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c80d7a3c7fa5de07ae9d85a2cea2f09f68587b6672c556110f5749492fdb8296
+ size 1460598
weights/cheetah/run/ckpt-29.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b8563160a488c65265aafd1b1dd5b9445ad590cac5818b137dcd8145b6e96e1
+ size 1460598
weights/cheetah/run/ckpt-3.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05dadc4b5300a18d0b90c1228ceb68c80b45be90effdd80fe9880fbae54471fc
+ size 1459154
weights/cheetah/run/ckpt-30.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd1e1bd5b24961734b43e2732a8440030203b9987c2a04c9aaf274c922e74943
+ size 1460598
weights/cheetah/run/ckpt-31.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47c2b5479e4bf6940a333ec7109995a94ed02526e102740f5cc923d9d30e2203
+ size 1460598
weights/cheetah/run/ckpt-32.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:140b1c07a07562830610b587ad49b21d717afd40390496e82e5e0aa91b1229a6
+ size 1460598
weights/cheetah/run/ckpt-33.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6bb557e94f529f4cbd098cd21b11f9de098ab91546d1263f1094c97e2fb71948
+ size 1460598
weights/cheetah/run/ckpt-34.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd64da97d4f9ec48f051a5387b0834d2d49ca0ba8cd59bf8650f7dde76530df5
+ size 1460598
weights/cheetah/run/ckpt-35.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95a0abbd8aef1ad5edb6d243e43211d47ac62074d12f511a912833c0cd9f1351
+ size 1460598
weights/cheetah/run/ckpt-36.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce685c3e09e2aff6b090551cb73616cdd367041a7e7003c1c29ccc2418a92e8d
+ size 1460598
weights/cheetah/run/ckpt-37.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b5d14399d70e9bd4ce15b48f4893d7e91573b733851aafdc93ba935c6d6917f
+ size 1460598
weights/cheetah/run/ckpt-38.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ec0976035b637fd99fcd87aa2ab686a9c03735af8167d8ceba8694cca232608
+ size 1460598