DynaTraj — Training and Data Collection (SB3 + dm_control)
This repo provides two entry-point scripts:
- `sb3_train.py`: Train an RL policy with Stable-Baselines3 on dm_control tasks and periodically save policy checkpoints
- `sb3_collect.py`: Load specific checkpoints and collect fixed-length trajectories into `.npz` datasets with minimal metadata
Environment & Dependencies
- Python 3.10+
- Required:
  - dm_control + MuJoCo: `pip install dm-control mujoco`
  - Reinforcement learning: `pip install stable-baselines3`
  - Utilities: `pip install numpy tqdm torch`
  - Gym API: prefers `gymnasium`, falls back to `gym` (handled automatically; see the sketch below)
- Optional (only if you need on-screen rendering during collection):
  - OpenCV UI: `pip install opencv-python`
  - or Matplotlib: `pip install matplotlib`
Note: On headless Linux/remote environments, rendering might require EGL/OSMesa configuration. If you do not render, you do not need any display backend.
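The Gym-API fallback mentioned in the list above is the usual try/except import shim; a minimal sketch of that pattern (illustrative, not necessarily the repo's exact code):

```python
# Prefer gymnasium; fall back to the legacy gym package if it is not installed.
try:
    import gymnasium as gym
except ImportError:
    import gym

print("Gym API provided by:", gym.__name__)
```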
Output Layout (Conventions)
- Training checkpoints: `weights/<domain>/<task>/ckpt-<k>.pt`
  - `k` starts from 1; default save interval is every 10,000 timesteps
  - Examples: `ckpt-1.pt` ≈ 10k steps, `ckpt-2.pt` ≈ 20k steps
- Full SB3 model: `weights/sb3_<algo>_<domain>-<task>_seed<seed>_<timestamp>.zip`
- Collected datasets: `dataset/*.npz` with a companion `*_metadata.pkl`
Default absolute paths in this repo:
- Checkpoint root: `/home/lau/sim/DynaTraj/weights`
- Dataset output: `/home/lau/sim/DynaTraj/dataset`
Training
Script: sb3_train.py
Supported algorithms (argument is case-sensitive and must be lowercase here):
`sac`, `ppo`, `td3`
Common args:
- `--domain`: dm_control domain, e.g., `cheetah`, `quadruped` (default: `cheetah`)
- `--task`: task name, e.g., `run`, `walk` (default: `run`)
- `--algo`: `sac` | `ppo` | `td3` (default: `sac`)
- `--total_timesteps`: total training steps (default: `500000`)
- `--n_envs`: number of parallel envs (default: `1`; uses sub-process vectorization if >1)
- `--seed`: random seed (default: `0`)
- `--device`: `cpu` | `cuda` | `auto` (default: `auto`)
- `--out_dir`: where to save models & checkpoints (default: `/home/lau/sim/DynaTraj/weights`)
Example:
python /home/lau/sim/DynaTraj/sb3_train.py \
--domain cheetah --task run \
--algo sac \
--total_timesteps 500000 \
--out_dir /home/lau/sim/DynaTraj/weights
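Conceptually, the script wraps a dm_control task as a Gym env and trains an SB3 algorithm on it. The sketch below is an assumption of that pattern (the `FlattenDMControl` wrapper is illustrative, not the repo's actual class; observation flattening and action clipping follow the behavior described in the Tips section):

```python
import numpy as np
import gymnasium as gym
from dm_control import suite
from stable_baselines3 import SAC


class FlattenDMControl(gym.Env):
    """Wrap a dm_control task as a gymnasium env with a flat Box observation (illustrative)."""

    def __init__(self, domain: str = "cheetah", task: str = "run", seed: int = 0):
        self._env = suite.load(domain, task, task_kwargs={"random": seed})
        obs_dim = sum(int(np.prod(spec.shape)) for spec in self._env.observation_spec().values())
        act_spec = self._env.action_spec()
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, (obs_dim,), np.float32)
        self.action_space = gym.spaces.Box(
            act_spec.minimum.astype(np.float32), act_spec.maximum.astype(np.float32), dtype=np.float32
        )

    @staticmethod
    def _flatten(obs_dict):
        # Concatenate the observation dict into a single float32 vector.
        return np.concatenate([np.asarray(v, dtype=np.float32).ravel() for v in obs_dict.values()])

    def reset(self, *, seed=None, options=None):
        ts = self._env.reset()
        return self._flatten(ts.observation), {}

    def step(self, action):
        # Clip to env bounds, as described in the Tips section.
        action = np.clip(action, self.action_space.low, self.action_space.high)
        ts = self._env.step(action)
        # dm_control suite episodes end on a time limit, reported here as truncation.
        return self._flatten(ts.observation), float(ts.reward or 0.0), False, ts.last(), {}


if __name__ == "__main__":
    model = SAC("MlpPolicy", FlattenDMControl("cheetah", "run"), seed=0, device="auto", verbose=1)
    model.learn(total_timesteps=500_000)
```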
Checkpointing:
- A policy checkpoint is saved every 10,000 global timesteps as `ckpt-<k>.pt` (1-based counter); a sketch of this pattern follows below.
- A full SB3 model `.zip` is also saved at the end for plain SB3 loading if needed.
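The periodic `ckpt-<k>.pt` saving can be pictured as a small SB3 callback. The sketch below is a hedged guess at that mechanism; the payload key `policy_state_dict` follows the Tips section, but the repo's actual payload may contain additional fields:

```python
import os
import torch
from stable_baselines3.common.callbacks import BaseCallback


class PolicyCheckpointCallback(BaseCallback):
    """Save the policy state dict every `save_freq` global timesteps (illustrative sketch)."""

    def __init__(self, save_dir: str, save_freq: int = 10_000):
        super().__init__()
        self.save_dir = save_dir
        self.save_freq = save_freq
        self._k = 0  # becomes 1-based after the first save

    def _on_step(self) -> bool:
        if self.num_timesteps // self.save_freq > self._k:
            self._k += 1
            os.makedirs(self.save_dir, exist_ok=True)
            payload = {
                "policy_state_dict": self.model.policy.state_dict(),
                "timesteps": self.num_timesteps,
            }
            torch.save(payload, os.path.join(self.save_dir, f"ckpt-{self._k}.pt"))
        return True
```

In this sketch the callback would be passed as `model.learn(..., callback=PolicyCheckpointCallback("weights/cheetah/run"))`.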
Data Collection
Script: sb3_collect.py
Purpose: Run inference with specific training checkpoints and write fixed-length trajectories to .npz files plus metadata.
Common args:
- `--domain` / `--task`: must match training (defaults: `cheetah` / `run`)
- `--ckpt_root`: root directory of checkpoints (default: `/home/lau/sim/DynaTraj/weights`)
- `--ckpt_indices`: comma-separated checkpoint indices, e.g., `1,5,10` (note: starts from 1)
- `--trajectories_per_ckpt`: how many trajectories per checkpoint (default: `5120`)
- `--steps_per_trajectory`: steps per trajectory (default: `24`)
- `--out_dir`: dataset output directory (default: `/home/lau/sim/DynaTraj/dataset`)
- `--device`: inference device (default: `cpu`)
- `--render`: optional flag to render frames (requires OpenCV or Matplotlib)
- `--algo`: usually unnecessary; the script reads the real algo name from the checkpoint payload. Only used as a fallback (must be UPPERCASE: `SAC` | `PPO` | `TD3`).
Example:
python sb3_collect.py \
--domain cheetah --task run \
--algo SAC \
--ckpt_root ./weights \
--ckpt_indices 1,20,30,50 \
--trajectories_per_ckpt 1024 \
--steps_per_trajectory 512 \
--out_dir ./dataset \
--device cpu
With rendering (optional):
python /home/lau/sim/DynaTraj/sb3_collect.py \
--domain cheetah --task run \
--algo SAC \
--ckpt_root /home/lau/sim/DynaTraj/weights \
--ckpt_indices 50 \
--trajectories_per_ckpt 5120 \
--steps_per_trajectory 24 \
--out_dir /home/lau/sim/DynaTraj/dataset \
--device cpu \
--render
For the bouncing-ball task: `python bb_collect.py --trajectories 1024 --steps_per_trajectory 8192`
Notes:
- The script searches checkpoints under `ckpt_root/<domain>/<task>/` as `ckpt-<k>.pt`. If you see an error for `ckpt-0.pt`, switch to 1-based indices (a path-resolution sketch follows this list).
- If your environment has no display backend, simply omit `--render`.
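As a quick sanity check of which files `--ckpt_indices` will resolve to, a small helper sketch (not part of the repo) that applies the layout above:

```python
from pathlib import Path


def resolve_ckpts(ckpt_root: str, domain: str, task: str, indices: str) -> list[Path]:
    """Map comma-separated, 1-based indices to ckpt_root/<domain>/<task>/ckpt-<k>.pt paths."""
    root = Path(ckpt_root) / domain / task
    paths = [root / f"ckpt-{int(k)}.pt" for k in indices.split(",")]
    missing = [p for p in paths if not p.exists()]
    if missing:
        raise FileNotFoundError(f"Missing checkpoints: {missing}")
    return paths


# Example using the defaults from this README:
print(resolve_ckpts("/home/lau/sim/DynaTraj/weights", "cheetah", "run", "1,20,30,50"))
```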
Output Format
For each k:
- Dataset file: `<out_dir>/sb3_<domain>_<task>_ckpt<kNNN>_<timestamp>.npz`
- Metadata file: `<out_dir>/sb3_<domain>_<task>_ckpt<kNNN>_<timestamp>_metadata.pkl`
Metadata keys include:
- `domain`, `task`, `algo`, `seed`
- `ckpt_index` (the `k` you collected)
- `trajectories_per_ckpt`, `steps_per_trajectory`
- `total_trajectories`, `total_steps`
- `render` (whether rendering was enabled)
The exact arrays inside .npz are defined by the internal TrajectoryBuffer implementation. Typically they include time-aligned observation/state tensors, actions, rewards, and done flags.
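To inspect what a collected dataset actually contains, a minimal loading sketch (array names depend on the `TrajectoryBuffer`, so it just prints whatever keys are present; replace the path with one of your own files):

```python
import pickle
import numpy as np

npz_path = "/home/lau/sim/DynaTraj/dataset/<your_file>.npz"  # placeholder: pick a real file

data = np.load(npz_path)
for key in data.files:
    print(key, data[key].shape, data[key].dtype)

# Companion metadata follows the naming convention above.
with open(npz_path.replace(".npz", "_metadata.pkl"), "rb") as f:
    meta = pickle.load(f)
print(meta["domain"], meta["task"], meta["ckpt_index"], meta["total_trajectories"])
```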
Data Format
Transition:
a_t = pi(s_t)
s_{t+1} = f(s_t, a_t)
1. Cheetah
   - state
     - `qpos_t` (x, z, pitch, joints) = 9
     - `qvel_t` (x, z, pitch, joints) = 9
   - action
     - `tau_t` (joints) = 6
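These dimensions can be checked directly against dm_control (qpos/qvel come from the MuJoCo physics state, the action size from the task's action spec):

```python
from dm_control import suite

env = suite.load("cheetah", "run")
env.reset()
print("qpos:", env.physics.data.qpos.shape)  # (9,) -> x, z, pitch + 6 joint angles
print("qvel:", env.physics.data.qvel.shape)  # (9,) -> matching velocities
print("action:", env.action_spec().shape)    # (6,) -> joint torques
```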
Tips
- Prefer absolute paths for `--ckpt_root` and `--out_dir` (most examples above use absolute paths).
- Training uses `DummyVecEnv`/`SubprocVecEnv`, automatically flattens dm_control observations, and clips actions to env bounds.
- Collection reconstructs an SB3 policy and loads the `policy_state_dict` from each checkpoint; it does not require the full `.zip` model (see the sketch below).
- Algo-arg case: training expects lowercase (`sac` | `ppo` | `td3`), collection fallback uses UPPERCASE (`SAC` | `PPO` | `TD3`).
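A hedged sketch of what "reconstruct an SB3 policy and load the `policy_state_dict`" can look like; the payload key name is an assumption based on the notes above, and `env` is any SB3-compatible env for the same task (for instance, the wrapper sketch in the Training section):

```python
import numpy as np
import torch
from stable_baselines3 import SAC


def load_policy(ckpt_path: str, env, device: str = "cpu") -> SAC:
    """Rebuild a SAC model for `env`, then overwrite its policy weights from a ckpt-<k>.pt file."""
    model = SAC("MlpPolicy", env, device=device)
    payload = torch.load(ckpt_path, map_location=device)
    model.policy.load_state_dict(payload["policy_state_dict"])  # assumed payload key
    return model


def collect_trajectory(model: SAC, env, steps: int = 24) -> dict:
    """Roll out a_t = pi(s_t), s_{t+1} = f(s_t, a_t) for a fixed number of steps."""
    obs, _ = env.reset()
    traj = {"obs": [], "act": [], "rew": []}
    for _ in range(steps):
        action, _ = model.predict(obs, deterministic=True)
        traj["obs"].append(obs)
        traj["act"].append(action)
        obs, reward, terminated, truncated, _ = env.step(action)
        traj["rew"].append(reward)
        if terminated or truncated:
            obs, _ = env.reset()
    return {k: np.asarray(v) for k, v in traj.items()}
```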