Upload folder using huggingface_hub

Changed files:

- docs/BENCHMARKS.md +26 -4
- docs/CHANGELOG.md +22 -0
- docs/CROSSQ_TRACKER.md +292 -0
- docs/IMPROVEMENTS_ROADMAP.md +226 -0
- docs/plots/Acrobot-v1_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Ant-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Breakout-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/CartPole-v1_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/HalfCheetah-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Hopper-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Humanoid-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/HumanoidStandup-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/InvertedDoublePendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/InvertedPendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/LunarLander-v3_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/LunarLanderContinuous-v3_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/MsPacman-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Pendulum-v1_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Pong-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Pusher-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Qbert-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Reacher-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Seaquest-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/SpaceInvaders-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Swimmer-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Walker2d-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
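The `*_mean_returns_ma_vs_frames` plots above chart a moving average (MA) of episode mean returns, the same statistic the tables below report and that targets like "reward MA > 200" refer to. A minimal sketch of a trailing moving average — the window size here is an illustrative assumption, not SLM Lab's actual setting:

```python
def moving_average(returns, window=100):
    """Trailing moving average: mean of the last `window` return values at each step."""
    out = []
    for i in range(len(returns)):
        lo = max(0, i - window + 1)  # the window shrinks at the start of training
        out.append(sum(returns[lo:i + 1]) / (i + 1 - lo))
    return out
```

Final MA scores such as CartPole's 496.68 are read off the tail of this curve, which is why they are smoother than raw per-episode returns.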
## docs/BENCHMARKS.md (changed)

Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20 …)

### Phase 1: Classic Control

#### CartPole-v1

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| A2C | ✅ | 496.68 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_cartpole_arc | [a2c_gae_cartpole_arc_2026_02_11_142531](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_cartpole_arc_2026_02_11_142531) |
| PPO | ✅ | 498.94 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_cartpole_arc | [ppo_cartpole_arc_2026_02_11_144029](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_cartpole_arc_2026_02_11_144029) |
| SAC | ✅ | 406.09 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_cartpole_arc | [sac_cartpole_arc_2026_02_11_144155](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_cartpole_arc_2026_02_11_144155) |
#### Acrobot-v1

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| A2C | ✅ | -83.99 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_acrobot_arc | [a2c_gae_acrobot_arc_2026_02_11_153806](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_acrobot_arc_2026_02_11_153806) |
| PPO | ✅ | -81.28 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_acrobot_arc | [ppo_acrobot_arc_2026_02_11_153758](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_acrobot_arc_2026_02_11_153758) |
| SAC | ✅ | -92.60 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_acrobot_arc | [sac_acrobot_arc_2026_02_11_162211](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_acrobot_arc_2026_02_11_162211) |
#### Pendulum-v1

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| A2C | ❌ | -820.74 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_pendulum_arc | [a2c_gae_pendulum_arc_2026_02_11_162217](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_pendulum_arc_2026_02_11_162217) |
| PPO | ✅ | -174.87 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_pendulum_arc | [ppo_pendulum_arc_2026_02_11_162156](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_pendulum_arc_2026_02_11_162156) |
| SAC | ✅ | -150.97 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_pendulum_arc | [sac_pendulum_arc_2026_02_11_162240](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_pendulum_arc_2026_02_11_162240) |
### Phase 2: Box2D

#### 2.1 LunarLander-v3

**Docs**: [LunarLander](https://gymnasium.farama.org/environments/box2d/lunar_lander/) | State: Box(8) | Action: Discrete(4) | Target reward MA > 200
| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| A2C | ❌ | 27.38 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_arc | [a2c_gae_lunar_arc_2026_02_11_224304](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_arc_2026_02_11_224304) |
| PPO | ⚠️ | 183.30 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_arc | [ppo_lunar_arc_2026_02_11_201303](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_arc_2026_02_11_201303) |
| SAC | ⚠️ | 106.17 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_arc | [sac_lunar_arc_2026_02_11_201417](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_arc_2026_02_11_201417) |

#### 2.2 LunarLanderContinuous-v3

**Docs**: [LunarLander](https://gymnasium.farama.org/environments/box2d/lunar_lander/) | State: Box(8) | Action: Box(2) | Target reward MA > 200
| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| A2C | ❌ | -76.81 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_continuous_arc | [a2c_gae_lunar_continuous_arc_2026_02_11_224301](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_continuous_arc_2026_02_11_224301) |
| PPO | ⚠️ | 132.58 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_continuous_arc | [ppo_lunar_continuous_arc_2026_02_11_224229](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_continuous_arc_2026_02_11_224229) |
| SAC | ⚠️ | 125.00 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_continuous_arc | [sac_lunar_continuous_arc_2026_02_12_222203](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_continuous_arc_2026_02_12_222203) |

### Phase 3: MuJoCo

#### Ant-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | 2138.28 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_ant_arc | [ppo_ant_arc_ant_2026_02_12_190644](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_ant_arc_ant_2026_02_12_190644) |
| SAC | ✅ | 4942.91 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_ant_arc | [sac_ant_arc_2026_02_11_225529](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_ant_arc_2026_02_11_225529) |
#### HalfCheetah-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | 6240.68 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_halfcheetah_2026_02_12_195553](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_halfcheetah_2026_02_12_195553) |
| SAC | ✅ | 9815.16 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_halfcheetah_arc | [sac_halfcheetah_4m_i2_arc_2026_02_14_185522](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_halfcheetah_4m_i2_arc_2026_02_14_185522) |
#### Hopper-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ⚠️ | 1653.74 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_hopper_arc | [ppo_hopper_arc_hopper_2026_02_12_222206](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_hopper_arc_hopper_2026_02_12_222206) |
| SAC | ⚠️ | 1416.52 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_hopper_arc | [sac_hopper_3m_i4_arc_2026_02_14_185434](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_hopper_3m_i4_arc_2026_02_14_185434) |
#### Humanoid-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | 2661.26 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_humanoid_2026_02_12_185439](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_humanoid_2026_02_12_185439) |
| SAC | ✅ | 1989.65 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_humanoid_arc | [sac_humanoid_arc_2026_02_12_020016](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_humanoid_arc_2026_02_12_020016) |
#### HumanoidStandup-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | 150104.59 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_humanoidstandup_2026_02_12_115050](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_humanoidstandup_2026_02_12_115050) |
| SAC | ✅ | 137357.00 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_humanoid_standup_arc | [sac_humanoid_standup_arc_2026_02_12_225150](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_humanoid_standup_arc_2026_02_12_225150) |
#### InvertedDoublePendulum-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | 8383.76 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_inverted_double_pendulum_arc | [ppo_inverted_double_pendulum_arc_inverteddoublependulum_2026_02_12_225231](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_inverted_double_pendulum_arc_inverteddoublependulum_2026_02_12_225231) |
| SAC | ✅ | 9032.67 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_inverted_double_pendulum_arc | [sac_inverted_double_pendulum_arc_2026_02_12_025206](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_inverted_double_pendulum_arc_2026_02_12_025206) |
#### InvertedPendulum-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | 949.94 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_inverted_pendulum_arc | [ppo_inverted_pendulum_arc_invertedpendulum_2026_02_12_062037](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_inverted_pendulum_arc_invertedpendulum_2026_02_12_062037) |
| SAC | ✅ | 928.43 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_inverted_pendulum_arc | [sac_inverted_pendulum_arc_2026_02_12_225503](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_inverted_pendulum_arc_2026_02_12_225503) |
#### Pusher-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | -49.59 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_longhorizon_arc | [ppo_mujoco_longhorizon_arc_pusher_2026_02_12_222228](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_longhorizon_arc_pusher_2026_02_12_222228) |
| SAC | ✅ | -43.00 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_pusher_arc | [sac_pusher_arc_2026_02_12_053603](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_pusher_arc_2026_02_12_053603) |
#### Reacher-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | -5.03 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_longhorizon_arc | [ppo_mujoco_longhorizon_arc_reacher_2026_02_12_115033](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_longhorizon_arc_reacher_2026_02_12_115033) |
| SAC | ✅ | -6.31 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_reacher_arc | [sac_reacher_arc_2026_02_12_055200](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_reacher_arc_2026_02_12_055200) |
#### Swimmer-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | 282.44 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_swimmer_arc | [ppo_swimmer_arc_swimmer_2026_02_12_100445](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_swimmer_arc_swimmer_2026_02_12_100445) |
| SAC | ✅ | 301.34 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_swimmer_arc | [sac_swimmer_arc_2026_02_12_054349](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_swimmer_arc_2026_02_12_054349) |
#### Walker2d-v5

| Algorithm | Status | MA score | Spec file | Spec name | Dataset |
|-----------|--------|----------|-----------|-----------|---------|
| PPO | ✅ | 4378.62 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_walker2d_2026_02_12_190312](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_walker2d_2026_02_12_190312) |
| SAC | ⚠️ | 3123.66 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_walker2d_arc | [sac_walker2d_3m_i4_arc_2026_02_14_185550](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_walker2d_3m_i4_arc_2026_02_14_185550) |
### Phase 4: Atari

| Environment | MA score | Spec name | Dataset |
|-------------|----------|-----------|---------|
| ALE/Breakout-v5 | 326.47 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_breakout_2026_02_13_230455](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_breakout_2026_02_13_230455) |
| | 20.23 | sac_atari_arc | [sac_atari_arc_breakout_2026_02_15_201235](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_breakout_2026_02_15_201235) |
| | 273 | a2c_gae_atari_arc | [a2c_gae_atari_breakout_2026_01_31_213610](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_breakout_2026_01_31_213610) |
| ALE/Carnival-v5 | 3912.59 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_carnival_2026_02_13_230438](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_carnival_2026_02_13_230438) |
| | 3501.37 | sac_atari_arc | [sac_atari_arc_carnival_2026_02_17_105834](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_carnival_2026_02_17_105834) |
| | 2170 | a2c_gae_atari_arc | [a2c_gae_atari_carnival_2026_02_01_082726](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_carnival_2026_02_01_082726) |
| ALE/MsPacman-v5 | 2330.74 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_mspacman_2026_02_14_102435](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_mspacman_2026_02_14_102435) |
| | 1336.96 | sac_atari_arc | [sac_atari_arc_mspacman_2026_02_17_221523](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_mspacman_2026_02_17_221523) |
| | 2110 | a2c_gae_atari_arc | [a2c_gae_atari_mspacman_2026_02_01_001100](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_mspacman_2026_02_01_001100) |
| ALE/NameThisGame-v5 | 6879.23 | ppo_atari_arc | [ppo_atari_arc_namethisgame_2026_02_14_103319](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_namethisgame_2026_02_14_103319) |
| | 3992.71 | sac_atari_arc | [sac_atari_arc_namethisgame_2026_02_17_220905](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_namethisgame_2026_02_17_220905) |
| | 5412 | a2c_gae_atari_arc | [a2c_gae_atari_namethisgame_2026_02_01_132733](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_namethisgame_2026_02_01_132733) |
| ALE/Pong-v5 | 16.69 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_pong_2026_02_14_103722](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_pong_2026_02_14_103722) |
| | 10.89 | sac_atari_arc | [sac_atari_arc_pong_2026_02_17_160429](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pong_2026_02_17_160429) |
| | 10.17 | a2c_gae_atari_arc | [a2c_gae_atari_pong_2026_01_31_213635](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pong_2026_01_31_213635) |
| ALE/Pooyan-v5 | 5308.66 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_pooyan_2026_02_14_114730](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_pooyan_2026_02_14_114730) |
| | 2530.78 | sac_atari_arc | [sac_atari_arc_pooyan_2026_02_17_220346](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pooyan_2026_02_17_220346) |
| | 2997 | a2c_gae_atari_arc | [a2c_gae_atari_pooyan_2026_02_01_132748](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pooyan_2026_02_01_132748) |
| ALE/Qbert-v5 | 15460.48 | ppo_atari_arc | [ppo_atari_arc_qbert_2026_02_14_120409](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_qbert_2026_02_14_120409) |
| | 3331.98 | sac_atari_arc | [sac_atari_arc_qbert_2026_02_17_223117](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_qbert_2026_02_17_223117) |
| | 12619 | a2c_gae_atari_arc | [a2c_gae_atari_qbert_2026_01_31_213720](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_qbert_2026_01_31_213720) |
| ALE/Riverraid-v5 | 9599.75 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_riverraid_2026_02_14_124700](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_riverraid_2026_02_14_124700) |
| | 4744.95 | sac_atari_arc | [sac_atari_arc_riverraid_2026_02_18_014310](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_riverraid_2026_02_18_014310) |
| | 6558 | a2c_gae_atari_arc | [a2c_gae_atari_riverraid_2026_02_01_132507](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_riverraid_2026_02_01_132507) |
| ALE/Seaquest-v5 | 1775.14 | ppo_atari_arc | [ppo_atari_arc_seaquest_2026_02_11_095444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_seaquest_2026_02_11_095444) |
| | 1565.44 | sac_atari_arc | [sac_atari_arc_seaquest_2026_02_18_020822](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_seaquest_2026_02_18_020822) |
| | 850 | a2c_gae_atari_arc | [a2c_gae_atari_seaquest_2026_02_01_001001](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_seaquest_2026_02_01_001001) |
| ALE/Skiing-v5 | -28217.28 | ppo_atari_arc | [ppo_atari_arc_skiing_2026_02_14_174807](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_skiing_2026_02_14_174807) |
| | -17464.22 | sac_atari_arc | [sac_atari_arc_skiing_2026_02_18_024444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_skiing_2026_02_18_024444) |
| | -14235 | a2c_gae_atari_arc | [a2c_gae_atari_skiing_2026_02_01_132451](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_skiing_2026_02_01_132451) |
| ALE/SpaceInvaders-v5 | 892.49 | ppo_atari_arc | [ppo_atari_arc_spaceinvaders_2026_02_14_131114](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_spaceinvaders_2026_02_14_131114) |
| | 507.33 | sac_atari_arc | [sac_atari_arc_spaceinvaders_2026_02_18_033139](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_spaceinvaders_2026_02_18_033139) |
| | 784 | a2c_gae_atari_arc | [a2c_gae_atari_spaceinvaders_2026_02_01_000950](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_spaceinvaders_2026_02_01_000950) |
| ALE/StarGunner-v5 | 49328.73 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_stargunner_2026_02_14_131149](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_stargunner_2026_02_14_131149) |
| | 4295.97 | sac_atari_arc | [sac_atari_arc_stargunner_2026_02_18_033151](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_stargunner_2026_02_18_033151) |
| | 8665 | a2c_gae_atari_arc | [a2c_gae_atari_stargunner_2026_02_01_132406](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_stargunner_2026_02_01_132406) |
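Every Dataset cell in these tables points at the same `SLM-Lab/benchmark` dataset repo on the Hugging Face Hub, so a run's folder can be located or fetched programmatically. A sketch: the helper names are mine, the URL pattern is the one used throughout the tables, and `snapshot_download` is the standard `huggingface_hub` API (assumed installed for the download step):

```python
HF_BASE = "https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data"

def run_data_url(run_name: str) -> str:
    """Browse URL for one benchmark run folder, matching the table links."""
    return f"{HF_BASE}/{run_name}"

def download_run(run_name: str, local_dir: str = "data") -> str:
    """Fetch one run folder locally (requires `pip install huggingface_hub`)."""
    from huggingface_hub import snapshot_download  # lazy import: optional dependency
    return snapshot_download(
        repo_id="SLM-Lab/benchmark",
        repo_type="dataset",
        allow_patterns=[f"data/{run_name}/*"],  # only this run, not the whole repo
        local_dir=local_dir,
    )
```

For example, `download_run("sac_ant_arc_2026_02_11_225529")` would pull just the SAC Ant run listed above.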
| 137 |
| A2C | ✅ | 496.68 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_cartpole_arc | [a2c_gae_cartpole_arc_2026_02_11_142531](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_cartpole_arc_2026_02_11_142531) |
|
| 138 |
| PPO | ✅ | 498.94 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_cartpole_arc | [ppo_cartpole_arc_2026_02_11_144029](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_cartpole_arc_2026_02_11_144029) |
|
| 139 |
| SAC | ✅ | 406.09 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_cartpole_arc | [sac_cartpole_arc_2026_02_11_144155](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_cartpole_arc_2026_02_11_144155) |
|
| 140 |
+
| CrossQ | ✅ | 405.88 | [slm_lab/spec/benchmark/crossq/crossq_classic.yaml](../slm_lab/spec/benchmark/crossq/crossq_classic.yaml) | crossq_cartpole | [crossq_cartpole_arc_2026_02_21_100045](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_cartpole_arc_2026_02_21_100045) |
|
| 141 |
|
| 142 |

|
| 143 |
|
|
|
|
| 154 |
| A2C | ✅ | -83.99 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_acrobot_arc | [a2c_gae_acrobot_arc_2026_02_11_153806](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_acrobot_arc_2026_02_11_153806) |
|
| 155 |
| PPO | ✅ | -81.28 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_acrobot_arc | [ppo_acrobot_arc_2026_02_11_153758](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_acrobot_arc_2026_02_11_153758) |
|
| 156 |
| SAC | ✅ | -92.60 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_acrobot_arc | [sac_acrobot_arc_2026_02_11_162211](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_acrobot_arc_2026_02_11_162211) |
|
| 157 |
+
| CrossQ | ✅ | -103.89 | [slm_lab/spec/benchmark/crossq/crossq_classic.yaml](../slm_lab/spec/benchmark/crossq/crossq_classic.yaml) | crossq_acrobot | [crossq_acrobot_2026_02_23_122342](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_acrobot_2026_02_23_122342) |
|
| 158 |
|
| 159 |

|
| 160 |
|
|
|
|
| 169 |
| A2C | ❌ | -820.74 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_pendulum_arc | [a2c_gae_pendulum_arc_2026_02_11_162217](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_pendulum_arc_2026_02_11_162217) |
|
| 170 |
| PPO | ✅ | -174.87 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_pendulum_arc | [ppo_pendulum_arc_2026_02_11_162156](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_pendulum_arc_2026_02_11_162156) |
|
| 171 |
| SAC | ✅ | -150.97 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_pendulum_arc | [sac_pendulum_arc_2026_02_11_162240](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_pendulum_arc_2026_02_11_162240) |
|
| 172 |
+
| CrossQ | ✅ | -163.52 | [slm_lab/spec/benchmark/crossq/crossq_classic.yaml](../slm_lab/spec/benchmark/crossq/crossq_classic.yaml) | crossq_pendulum | [crossq_pendulum_2026_02_21_123841](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_pendulum_2026_02_21_123841) |
|
| 173 |
|
| 174 |

|
| 175 |
|
| 176 |
### Phase 2: Box2D
|
| 177 |
|
| 178 |
+
#### 2.1 LunarLander-v3
|
| 179 |
|
| 180 |
**Docs**: [LunarLander](https://gymnasium.farama.org/environments/box2d/lunar_lander/) | State: Box(8) | Action: Discrete(4) | Target reward MA > 200
|
| 181 |
|
|
|
|
| 188 |
| A2C | ❌ | 27.38 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_arc | [a2c_gae_lunar_arc_2026_02_11_224304](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_arc_2026_02_11_224304) |
|
| 189 |
| PPO | ⚠️ | 183.30 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_arc | [ppo_lunar_arc_2026_02_11_201303](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_arc_2026_02_11_201303) |
|
| 190 |
| SAC | ⚠️ | 106.17 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_arc | [sac_lunar_arc_2026_02_11_201417](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_arc_2026_02_11_201417) |
|
| 191 |
+
| CrossQ | ❌ | 136.25 | [slm_lab/spec/benchmark/crossq/crossq_box2d.yaml](../slm_lab/spec/benchmark/crossq/crossq_box2d.yaml) | crossq_lunar | [crossq_lunar_2026_02_21_123730](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_lunar_2026_02_21_123730) |
|
| 192 |
|
| 193 |
+

|
| 194 |
|
| 195 |
+
#### 2.2 LunarLanderContinuous-v3
|
| 196 |
|
| 197 |
**Docs**: [LunarLander](https://gymnasium.farama.org/environments/box2d/lunar_lander/) | State: Box(8) | Action: Box(2) | Target reward MA > 200
|
| 198 |
|
|
|
|
| 203 |
| A2C | ❌ | -76.81 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_continuous_arc | [a2c_gae_lunar_continuous_arc_2026_02_11_224301](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_continuous_arc_2026_02_11_224301) |
|
| 204 |
| PPO | ⚠️ | 132.58 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_continuous_arc | [ppo_lunar_continuous_arc_2026_02_11_224229](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_continuous_arc_2026_02_11_224229) |
|
| 205 |
| SAC | ⚠️ | 125.00 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_continuous_arc | [sac_lunar_continuous_arc_2026_02_12_222203](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_continuous_arc_2026_02_12_222203) |
|
| 206 |
+
| CrossQ | ✅ | 249.85 | [slm_lab/spec/benchmark/crossq/crossq_box2d.yaml](../slm_lab/spec/benchmark/crossq/crossq_box2d.yaml) | crossq_lunar_continuous | [crossq_lunar_continuous_arc_2026_02_21_100052](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_lunar_continuous_arc_2026_02_21_100052) |
|
| 207 |
|
| 208 |
+

|
| 209 |
|
| 210 |
### Phase 3: MuJoCo
|
| 211 |
|
|
|
|
| 278 |
|-----------|--------|-----|-----------|-----------|---------|
|
| 279 |
| PPO | ✅ | 2138.28 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_ant_arc | [ppo_ant_arc_ant_2026_02_12_190644](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_ant_arc_ant_2026_02_12_190644) |
|
| 280 |
| SAC | ✅ | 4942.91 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_ant_arc | [sac_ant_arc_2026_02_11_225529](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_ant_arc_2026_02_11_225529) |
|
| 281 |
+
| CrossQ | ✅ | 5108.47 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_ant_ln_7m | [crossq_ant_ln_7m_2026_02_22_015136](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_ant_ln_7m_2026_02_22_015136) |
|
| 282 |
|
| 283 |

|
| 284 |
|
|
|
|
| 292 |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 6240.68 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_halfcheetah_2026_02_12_195553](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_halfcheetah_2026_02_12_195553) |
| SAC | ✅ | 9815.16 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_halfcheetah_arc | [sac_halfcheetah_4m_i2_arc_2026_02_14_185522](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_halfcheetah_4m_i2_arc_2026_02_14_185522) |
+| CrossQ | ✅ | 9969.18 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_halfcheetah_ln_8m | [crossq_halfcheetah_ln_8m_2026_02_22_111117](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_halfcheetah_ln_8m_2026_02_22_111117) |

|-----------|--------|-----|-----------|-----------|---------|
| PPO | ⚠️ | 1653.74 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_hopper_arc | [ppo_hopper_arc_hopper_2026_02_12_222206](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_hopper_arc_hopper_2026_02_12_222206) |
| SAC | ⚠️ | 1416.52 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_hopper_arc | [sac_hopper_3m_i4_arc_2026_02_14_185434](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_hopper_3m_i4_arc_2026_02_14_185434) |
+| CrossQ | ⚠️ | 1295.21 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_hopper | [crossq_hopper_2026_02_21_173921](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_hopper_2026_02_21_173921) |

|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 2661.26 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_humanoid_2026_02_12_185439](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_humanoid_2026_02_12_185439) |
| SAC | ✅ | 1989.65 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_humanoid_arc | [sac_humanoid_arc_2026_02_12_020016](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_humanoid_arc_2026_02_12_020016) |
+| CrossQ | ✅ | 1850.44 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_humanoid_ln_i2 | [crossq_humanoid_ln_i2_2026_02_22_014755](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_humanoid_ln_i2_2026_02_22_014755) |

|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 150104.59 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_humanoidstandup_2026_02_12_115050](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_humanoidstandup_2026_02_12_115050) |
| SAC | ✅ | 137357.00 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_humanoid_standup_arc | [sac_humanoid_standup_arc_2026_02_12_225150](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_humanoid_standup_arc_2026_02_12_225150) |
+| CrossQ | ✅ | 154162.28 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_humanoid_standup_v2 | [crossq_humanoid_standup_v2_2026_02_22_155517](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_humanoid_standup_v2_2026_02_22_155517) |

|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 8383.76 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_inverted_double_pendulum_arc | [ppo_inverted_double_pendulum_arc_inverteddoublependulum_2026_02_12_225231](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_inverted_double_pendulum_arc_inverteddoublependulum_2026_02_12_225231) |
| SAC | ✅ | 9032.67 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_inverted_double_pendulum_arc | [sac_inverted_double_pendulum_arc_2026_02_12_025206](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_inverted_double_pendulum_arc_2026_02_12_025206) |
+| CrossQ | ⚠️ | 8255.82 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_inverted_double_pendulum_v2 | [crossq_inverted_double_pendulum_v2_2026_02_22_155616](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_inverted_double_pendulum_v2_2026_02_22_155616) |

|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 949.94 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_inverted_pendulum_arc | [ppo_inverted_pendulum_arc_invertedpendulum_2026_02_12_062037](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_inverted_pendulum_arc_invertedpendulum_2026_02_12_062037) |
| SAC | ✅ | 928.43 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_inverted_pendulum_arc | [sac_inverted_pendulum_arc_2026_02_12_225503](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_inverted_pendulum_arc_2026_02_12_225503) |
+| CrossQ | ⚠️ | 841.87 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_inverted_pendulum | [crossq_inverted_pendulum_2026_02_21_134607](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_inverted_pendulum_2026_02_21_134607) |

|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | -49.59 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_longhorizon_arc | [ppo_mujoco_longhorizon_arc_pusher_2026_02_12_222228](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_longhorizon_arc_pusher_2026_02_12_222228) |
| SAC | ✅ | -43.00 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_pusher_arc | [sac_pusher_arc_2026_02_12_053603](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_pusher_arc_2026_02_12_053603) |
+| CrossQ | ✅ | -37.08 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_pusher | [crossq_pusher_2026_02_21_134637](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_pusher_2026_02_21_134637) |

|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | -5.03 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_longhorizon_arc | [ppo_mujoco_longhorizon_arc_reacher_2026_02_12_115033](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_longhorizon_arc_reacher_2026_02_12_115033) |
| SAC | ✅ | -6.31 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_reacher_arc | [sac_reacher_arc_2026_02_12_055200](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_reacher_arc_2026_02_12_055200) |
+| CrossQ | ✅ | -5.66 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_reacher | [crossq_reacher_2026_02_21_134606](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_reacher_2026_02_21_134606) |

|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 282.44 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_swimmer_arc | [ppo_swimmer_arc_swimmer_2026_02_12_100445](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_swimmer_arc_swimmer_2026_02_12_100445) |
| SAC | ✅ | 301.34 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_swimmer_arc | [sac_swimmer_arc_2026_02_12_054349](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_swimmer_arc_2026_02_12_054349) |
+| CrossQ | ✅ | 221.12 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_swimmer | [crossq_swimmer_2026_02_21_134711](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_swimmer_2026_02_21_134711) |

|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 4378.62 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_walker2d_2026_02_12_190312](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_walker2d_2026_02_12_190312) |
| SAC | ⚠️ | 3123.66 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_walker2d_arc | [sac_walker2d_3m_i4_arc_2026_02_14_185550](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_walker2d_3m_i4_arc_2026_02_14_185550) |
+| CrossQ | ✅ | 4277.15 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_walker2d_ln_7m | [crossq_walker2d_ln_7m_2026_02_22_014846](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_walker2d_ln_7m_2026_02_22_014846) |

| ALE/Breakout-v5 | 326.47 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_breakout_2026_02_13_230455](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_breakout_2026_02_13_230455) |
| | 20.23 | sac_atari_arc | [sac_atari_arc_breakout_2026_02_15_201235](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_breakout_2026_02_15_201235) |
| | 273 | a2c_gae_atari_arc | [a2c_gae_atari_breakout_2026_01_31_213610](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_breakout_2026_01_31_213610) |
+| | ❌ 0.91 | crossq_atari_breakout | [crossq_atari_breakout_2026_02_21_123715](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/crossq_atari_breakout_2026_02_21_123715) |
| ALE/Carnival-v5 | 3912.59 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_carnival_2026_02_13_230438](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_carnival_2026_02_13_230438) |
| | 3501.37 | sac_atari_arc | [sac_atari_arc_carnival_2026_02_17_105834](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_carnival_2026_02_17_105834) |
| | 2170 | a2c_gae_atari_arc | [a2c_gae_atari_carnival_2026_02_01_082726](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_carnival_2026_02_01_082726) |
| ALE/MsPacman-v5 | 2330.74 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_mspacman_2026_02_14_102435](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_mspacman_2026_02_14_102435) |
| | 1336.96 | sac_atari_arc | [sac_atari_arc_mspacman_2026_02_17_221523](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_mspacman_2026_02_17_221523) |
| | 2110 | a2c_gae_atari_arc | [a2c_gae_atari_mspacman_2026_02_01_001100](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_mspacman_2026_02_01_001100) |
+| | ❌ 238.51 | crossq_atari_mspacman | [crossq_atari_mspacman_2026_02_21_123827](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/crossq_atari_mspacman_2026_02_21_123827) |
| ALE/NameThisGame-v5 | 6879.23 | ppo_atari_arc | [ppo_atari_arc_namethisgame_2026_02_14_103319](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_namethisgame_2026_02_14_103319) |
| | 3992.71 | sac_atari_arc | [sac_atari_arc_namethisgame_2026_02_17_220905](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_namethisgame_2026_02_17_220905) |
| | 5412 | a2c_gae_atari_arc | [a2c_gae_atari_namethisgame_2026_02_01_132733](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_namethisgame_2026_02_01_132733) |
| ALE/Pong-v5 | 16.69 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_pong_2026_02_14_103722](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_pong_2026_02_14_103722) |
| | 10.89 | sac_atari_arc | [sac_atari_arc_pong_2026_02_17_160429](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pong_2026_02_17_160429) |
| | 10.17 | a2c_gae_atari_arc | [a2c_gae_atari_pong_2026_01_31_213635](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pong_2026_01_31_213635) |
+| | ❌ -20.82 | crossq_atari_pong | [crossq_atari_pong_2026_02_21_123746](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/crossq_atari_pong_2026_02_21_123746) |
| ALE/Pooyan-v5 | 5308.66 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_pooyan_2026_02_14_114730](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_pooyan_2026_02_14_114730) |
| | 2530.78 | sac_atari_arc | [sac_atari_arc_pooyan_2026_02_17_220346](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pooyan_2026_02_17_220346) |
| | 2997 | a2c_gae_atari_arc | [a2c_gae_atari_pooyan_2026_02_01_132748](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pooyan_2026_02_01_132748) |
| ALE/Qbert-v5 | 15460.48 | ppo_atari_arc | [ppo_atari_arc_qbert_2026_02_14_120409](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_qbert_2026_02_14_120409) |
| | 3331.98 | sac_atari_arc | [sac_atari_arc_qbert_2026_02_17_223117](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_qbert_2026_02_17_223117) |
| | 12619 | a2c_gae_atari_arc | [a2c_gae_atari_qbert_2026_01_31_213720](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_qbert_2026_01_31_213720) |
+| | ✅ 4268.66 | crossq_atari_qbert | [crossq_atari_qbert_2026_02_21_121014](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/crossq_atari_qbert_2026_02_21_121014) |
| ALE/Riverraid-v5 | 9599.75 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_riverraid_2026_02_14_124700](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_riverraid_2026_02_14_124700) |
| | 4744.95 | sac_atari_arc | [sac_atari_arc_riverraid_2026_02_18_014310](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_riverraid_2026_02_18_014310) |
| | 6558 | a2c_gae_atari_arc | [a2c_gae_atari_riverraid_2026_02_01_132507](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_riverraid_2026_02_01_132507) |
| ALE/Seaquest-v5 | 1775.14 | ppo_atari_arc | [ppo_atari_arc_seaquest_2026_02_11_095444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_seaquest_2026_02_11_095444) |
| | 1565.44 | sac_atari_arc | [sac_atari_arc_seaquest_2026_02_18_020822](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_seaquest_2026_02_18_020822) |
| | 850 | a2c_gae_atari_arc | [a2c_gae_atari_seaquest_2026_02_01_001001](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_seaquest_2026_02_01_001001) |
+| | ❌ 216.19 | crossq_atari_seaquest | [crossq_atari_seaquest_2026_02_21_123316](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/crossq_atari_seaquest_2026_02_21_123316) |
| ALE/Skiing-v5 | -28217.28 | ppo_atari_arc | [ppo_atari_arc_skiing_2026_02_14_174807](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_skiing_2026_02_14_174807) |
| | -17464.22 | sac_atari_arc | [sac_atari_arc_skiing_2026_02_18_024444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_skiing_2026_02_18_024444) |
| | -14235 | a2c_gae_atari_arc | [a2c_gae_atari_skiing_2026_02_01_132451](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_skiing_2026_02_01_132451) |
| ALE/SpaceInvaders-v5 | 892.49 | ppo_atari_arc | [ppo_atari_arc_spaceinvaders_2026_02_14_131114](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_spaceinvaders_2026_02_14_131114) |
| | 507.33 | sac_atari_arc | [sac_atari_arc_spaceinvaders_2026_02_18_033139](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_spaceinvaders_2026_02_18_033139) |
| | 784 | a2c_gae_atari_arc | [a2c_gae_atari_spaceinvaders_2026_02_01_000950](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_spaceinvaders_2026_02_01_000950) |
+| | ❌ 360.37 | crossq_atari_spaceinvaders | [crossq_atari_spaceinvaders_2026_02_21_123410](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/crossq_atari_spaceinvaders_2026_02_21_123410) |
| ALE/StarGunner-v5 | 49328.73 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_stargunner_2026_02_14_131149](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_stargunner_2026_02_14_131149) |
| | 4295.97 | sac_atari_arc | [sac_atari_arc_stargunner_2026_02_18_033151](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_stargunner_2026_02_18_033151) |
| | 8665 | a2c_gae_atari_arc | [a2c_gae_atari_stargunner_2026_02_01_132406](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_stargunner_2026_02_01_132406) |
docs/CHANGELOG.md
CHANGED
@@ -1,3 +1,25 @@
+# SLM-Lab v5.2.0
+
+Training path performance optimization. **+15% SAC throughput on GPU**, verified with no score regression.
+
+**What changed (18 files):**
+- `polyak_update`: in-place `lerp_()` replaces 3-op manual arithmetic
+- `SAC`: single `log_softmax→exp` replaces dual softmax+log_softmax; cached entropy between policy/alpha loss; cached `_is_per` and `_LOG2`
+- `to_torch_batch`: uint8/float16 sent directly to GPU then `.float()` — avoids 4x CPU float32 intermediate (matters for Atari 84x84x4)
+- `SumTree`: iterative propagation/retrieval replaces recursion; vectorized sampling
+- `forward_tails`: cached output (was called twice per step)
+- `VectorFullGameStatistics`: `deque(maxlen=N)` + `np.flatnonzero` replaces list+pop(0)+loop
+- `pydash→builtins`: `isinstance` over `ps.is_list/is_dict`, dict comprehensions over `ps.pick/ps.omit` in hot paths
+- `PPO`: `total_loss` as plain float prevents computation graph leak across epochs
+- Minor: `hasattr→is not None` in conv/recurrent forward, cached `_is_dev`, `no_decay` early exit in VarScheduler
+
+**Measured gains (normalized, same hardware A/B on RTX 3090):**
+- SAC MuJoCo: +15-17% fps
+- SAC Atari: +14% fps
+- PPO: ~0% (env-bound; most optimizations target SAC's training-heavy inner loop — PPO doesn't use polyak, replay buffer, twin Q, or entropy tuning)
+
+---
+
 # SLM-Lab v5.1.0

 TorchArc YAML benchmarks replace original hardcoded network architectures across all benchmark categories.
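To make the `SumTree` bullet above concrete (iterative propagation and retrieval replacing recursion), here is a minimal standalone sketch; it is a toy written for this changelog entry, not SLM-Lab's actual `SumTree` class:

```python
class SumTree:
    """Binary sum-tree over `capacity` leaf priorities, stored flat in one array.

    Internal nodes hold the sum of their children; leaf i lives at array
    index capacity - 1 + i. Update and sampling walk the tree with plain
    while-loops instead of recursive helper calls.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity - 1)

    def update(self, data_idx: int, priority: float) -> None:
        idx = data_idx + self.capacity - 1
        change = priority - self.tree[idx]
        self.tree[idx] = priority
        while idx != 0:  # iterative propagation up to the root
            idx = (idx - 1) // 2
            self.tree[idx] += change

    def total(self) -> float:
        return self.tree[0]  # root holds the sum of all priorities

    def get(self, s: float) -> int:
        """Return the data index whose cumulative-priority interval contains s."""
        idx = 0
        while True:  # iterative retrieval down the tree
            left = 2 * idx + 1
            if left >= len(self.tree):  # reached a leaf
                return idx - (self.capacity - 1)
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = left + 1
```

The loops stand in for the recursive propagate/retrieve helpers the entry says were replaced, trading call-stack depth for a constant-space walk.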
docs/CROSSQ_TRACKER.md
ADDED
@@ -0,0 +1,292 @@
+# CrossQ Benchmark Tracker
+
+Operational tracker for CrossQ benchmark runs. Updated by agent team.
+
+---
+
+## Run Status
+
+### Wave 0 — Improvement Runs (COMPLETED, intake deferred)
+
+| Run Name | Env | Score (MA) | Old Score | Status | Spec Name | Intake |
+|----------|-----|-----------|-----------|--------|-----------|--------|
+| crossq-acrobot-v2 | Acrobot-v1 | -98.63 | -108.18 | ✅ solved | crossq_acrobot | ⬜ needs pull+plot |
+| crossq-hopper-v8 | Hopper-v5 | 1295.21 | 1158.89 | ⚠️ improved | crossq_hopper | ⬜ needs pull+plot |
+| crossq-swimmer-v7 | Swimmer-v5 | 221.12 | 75.72 | ✅ solved | crossq_swimmer | ⬜ needs pull+plot |
+| crossq-invpend-v7 | InvertedPendulum-v5 | 841.87 | 830.36 | ⚠️ marginal | crossq_inverted_pendulum | ⬜ needs pull+plot |
+| crossq-invdoubpend-v7 | InvertedDoublePendulum-v5 | 4514.25 | 4952.63 | ❌ worse, keep old | crossq_inverted_double_pendulum | ⬜ skip |
+
+### Wave 1 — LayerNorm Experiments (COMPLETED)
+
+| Run Name | Env | Frames | Score | Spec Name | Notes |
+|----------|-----|--------|-------|-----------|-------|
+| crossq-humanoid-v2 | Humanoid-v5 | 3M | **2429.88** | crossq_humanoid | iter=4, 5.5h — VIOLATES 3h |
+| crossq-hopper-ln-v2 | Hopper-v5 | 3M | **1076.76** | crossq_hopper_ln | LN +2% vs baseline |
+| crossq-swimmer-ln-v2 | Swimmer-v5 | 3M | **22.90** | crossq_swimmer_ln | LN KILLED (-97%) |
+| crossq-humanoid-ln-v2 | Humanoid-v5 | 2M | **506.65** | crossq_humanoid_ln | LN +19%, needs more frames |
+
+### Wave 3 — Data Over Gradients (STOPPED — humanoid-ln-7m iter=1 inferior to iter=2)
+
+| Run Name | Env | Frames | Score (at kill) | Spec Name | Notes |
+|----------|-----|--------|----------------|-----------|-------|
+| crossq-humanoid-ln-7m | Humanoid-v5 | 7M | 706 (at 70%) | crossq_humanoid_ln | Stopped — iter=2 reached 1850 |
+
+### Wave 2 — Full LN Sweep (RUNNING, just launched)
+
+| Run Name | Env | Frames | Spec Name | Notes |
+|----------|-----|--------|-----------|-------|
+| crossq-walker-ln | Walker2d-v5 | 3M | crossq_walker2d_ln | **3890** — LN +22%! Near SAC 3900 |
+| crossq-halfcheetah-ln | HalfCheetah-v5 | 3M | crossq_halfcheetah_ln | **6596** — LN -18% vs 8085 |
+| crossq-ant-ln | Ant-v5 | 3M | crossq_ant_ln | **3706** — LN -5% vs 4046 |
+| crossq-invpend-ln | InvertedPendulum-v5 | 3M | crossq_inverted_pendulum_ln | **731** — LN -13% vs 842 |
+| crossq-invdoubpend-ln | InvertedDoublePendulum-v5 | 3M | crossq_inverted_double_pendulum_ln | **2727** — LN -45% vs 4953 |
+| crossq-cartpole-ln | CartPole-v1 | 300K | crossq_cartpole_ln | **418** — LN +38%! |
+| crossq-lunar-ln | LunarLander-v3 | 300K | crossq_lunar_ln | **126** — LN -19% vs 136 |
+
+### Wave 4 — Extended-Frame LN (COMPLETED)
+
+| Run Name | Env | Frames | Score (MA) | Spec Name | Notes |
+|----------|-----|--------|-----------|-----------|-------|
+| crossq-walker-ln-7m-v2 | Walker2d-v5 | 7M | **4277.15** | crossq_walker2d_ln_7m | ✅ BEATS SAC 3900! +10% |
+| crossq-halfcheetah-ln-7m-v2 | HalfCheetah-v5 | 6M | **8784.55** | crossq_halfcheetah_ln_7m | +9% vs non-LN 8085, -10% SAC |
+| crossq-ant-ln-7m-v2 | Ant-v5 | 6M | **5108.47** | crossq_ant_ln_7m | ✅ BEATS SAC 4844! +5% |
+| crossq-hopper-ln-7m | Hopper-v5 | 6M | 1182 (at kill) | crossq_hopper_ln_7m | Stopped — LN hurts Hopper |
+| crossq-walker-ln-i2 | Walker2d-v5 | 3.5M | 3766 (at kill) | crossq_walker2d_ln_i2 | Stopped — 7m run is better |
+| crossq-invdoubpend-ln-7m | InvertedDoublePendulum-v5 | 7M | 5796 (at kill) | crossq_inverted_double_pendulum_ln_7m | Stopped — iter=2 much better |
+
+### Wave 5 — iter=2 Gradient Density (COMPLETED)
+
+| Run Name | Env | Frames | Score (MA) | Spec Name | Notes |
+|----------|-----|--------|-----------|-----------|-------|
+| crossq-humanoid-ln-i2-v2 | Humanoid-v5 | 3.5M | **1850.44** | crossq_humanoid_ln_i2 | +265% vs old 507! -29% SAC |
+| crossq-invdoubpend-ln-i2-v2 | InvertedDoublePendulum-v5 | 3.5M | **7352.82** | crossq_inverted_double_pendulum_ln_i2 | +48% vs old 4953! -21% SAC |
+
+### Wave 6 — WeightNorm Actor (COMPLETED)
+
+| Run Name | Env | Frames | Score (MA) | Spec Name | Notes |
+|----------|-----|--------|-----------|-----------|-------|
+| crossq-humanoid-wn-v2 | Humanoid-v5 | 7M | **1681.45** | crossq_humanoid_wn | Strong but LN-i2 (1850) better |
+| crossq-swimmer-wn-v2 | Swimmer-v5 | 6M | **165.49** | crossq_swimmer_wn | ❌ Regressed vs non-LN 221 (high variance) |
+| crossq-hopper-wn | Hopper-v5 | 6M | 1097 (at kill) | crossq_hopper_wn | Stopped — not improving |
+| crossq-walker-wn | Walker2d-v5 | 7M | 4124 (at kill) | crossq_walker2d_wn | Stopped — LN-7m better |
+
+### Wave 7 — Next Improvement Runs (COMPLETED)
+
+| Run Name | Env | Frames | Score (MA) | Spec Name | Notes |
+|----------|-----|--------|-----------|-----------|-------|
+| crossq-humanoidstandup-ln-i2 | HumanoidStandup-v5 | 3.5M | **150583.47** | crossq_humanoid_standup_ln_i2 | BEATS SAC 138222 (+9%)! LN + iter=2 + [1024,1024] |
+| crossq-halfcheetah-ln-8m | HalfCheetah-v5 | 7.5M | **9969.18** | crossq_halfcheetah_ln_8m | BEATS SAC 9815 (+2%)! LN + iter=1, extended frames |
+| crossq-hopper-i2 | Hopper-v5 | 3.5M | — | crossq_hopper_i2 | STOPPED — 101fps (9.6h), way over budget |
+| crossq-invpend-7m | InvertedPendulum-v5 | 7M | — | crossq_inverted_pendulum_7m | Plain + iter=1, ~2.8h at 700fps |
+
+### Wave 8 — v2 Final Runs (COMPLETED)
+
+| Run Name | Env | Frames | Score (MA) | Spec Name | Notes |
+|----------|-----|--------|-----------|-----------|-------|
+| crossq-humanoidstandup-v2 | HumanoidStandup-v5 | 2M | **154162.28** | crossq_humanoid_standup_v2 | ✅ BEATS SAC +12%! LN iter=2, fewer frames |
+| crossq-idp-v2 | InvertedDoublePendulum-v5 | 2M | **8255.82** | crossq_inverted_double_pendulum_v2 | ⚠️ Gap -9% vs SAC (was -21%). LN iter=2 |
+| crossq-walker-v2 | Walker2d-v5 | 4M | **4162.65** | crossq_walker2d_v2 | Near old 4277, beats SAC +33%. LN iter=1 |
+| crossq-humanoid-v2 | Humanoid-v5 | 4M | 1435.28 | crossq_humanoid_v2 | Below old 1850, high variance. LN iter=2 |
+| crossq-hopper-v2 | Hopper-v5 | 3M | 1150.08 | crossq_hopper_v2 | Below old 1295. iter=2 didn't help |
+| crossq-ip-v3 | InvertedPendulum-v5 | 3M | 779.68 | crossq_inverted_pendulum_v2 | Below old 842. Seed variance |
+| crossq-swimmer-v2 | Swimmer-v5 | 3M | 144.52 | crossq_swimmer_v2 | ❌ iter=2 disaster (was 221). Keep old |
+
+---
+
+## Scorecard — CrossQ vs SAC/PPO
+
+### Phase 1: Classic Control
+
+| Env | CrossQ | Best Other | Gap | LN Run? |
+|-----|--------|-----------|-----|---------|
+| CartPole-v1 | **418** (LN) | 464 (SAC) | -10% | ✅ LN helps |
+| Acrobot-v1 | -98.63 | -84.77 (SAC) | close | ✅ solved |
+| LunarLander-v3 | 136.25 | 194 (PPO) | -30% | crossq-lunar-ln |
+| Pendulum-v1 | -163.52 | -168 (SAC) | ✅ beats | done |
+
+### Phase 2: Box2D
+
+| Env | CrossQ | Best Other | Gap | LN Run? |
+|-----|--------|-----------|-----|---------|
+| LunarLanderContinuous-v3 | 249.85 | 132 (PPO) | ✅ beats | done |
+
+### Phase 3: MuJoCo
+
+| Env | CrossQ | Best Other | Gap | LN Run? |
+|-----|--------|-----------|-----|---------|
+| HalfCheetah-v5 | **9969** (LN 8M) | 9815 (SAC) | **✅ +2%** | BEATS SAC! |
+| Hopper-v5 | 1295 | 1654 (PPO) | -22% | LN/WN both worse, keep baseline |
+| Walker2d-v5 | **4277** (LN 7M) | 3900 (SAC) | **✅ +10%** | BEATS SAC! |
+| Ant-v5 | **5108** (LN 6M) | 4844 (SAC) | **✅ +5%** | BEATS SAC! |
+| Humanoid-v5 | **1850** (LN i2) | 2601 (SAC) | **-29%** | Huge improvement from 507 |
+| HumanoidStandup-v5 | **154162** (LN i2 2M) | 138222 (SAC) | **✅ +12%** | BEATS SAC! v2 |
+| InvertedPendulum-v5 | 842 | 1000 (SAC) | -16% | LN hurts, keep baseline |
+| InvertedDoublePendulum-v5 | **8256** (LN i2 2M) | 9033 (SAC) | **-9%** | v2 improved from -21% |
+| Reacher-v5 | -5.66 | -5.87 (SAC) | ✅ beats | done |
+| Pusher-v5 | -37.08 | -38.41 (SAC) | ✅ beats | done |
+| Swimmer-v5 | 221 | 301 (SAC) | -27% | WN regressed (165), keep baseline |
+
+### Phase 4: Atari (PARKED — needs investigation before graduation)
+
+Tested: Breakout, MsPacman, Pong, Qbert, Seaquest, SpaceInvaders
+
+**Status**: Parked. An audit found issues, and CrossQ generally underperforms SAC on Atari. Investigate whether BRN warmup, lr tuning, or ConvNet-specific changes could help before graduating and publishing results.
+
+---
+
+## Intake Checklist (per run)
+
+1. ⬜ Extract score: `dstack logs NAME | grep trial_metrics` → total_reward_ma
+2. ⬜ Find HF folder: `huggingface_hub` API query
+3. ⬜ Pull data: `slm-lab pull SPEC_NAME`
+4. ⬜ Update BENCHMARKS.md: score + HF link + status
+5. ⬜ Regenerate plot: `slm-lab plot -t "ENV_NAME" -f FOLDER1,FOLDER2,...`
+6. ⬜ Commit + push
---
|
| 149 |
+
|
| 150 |
+
## Pending Fixes
|
| 151 |
+
|
| 152 |
+
- [x] Regenerate LunarLander plots with correct env name titles (564a6a96)
|
| 153 |
+
- [x] Universal env name audit across all plots (564a6a96)
|
| 154 |
+
- [x] Delete 58 stale Atari plots without -v5 suffix (564a6a96)
|
| 155 |
+
- [ ] Wave 0 intake: pull HF data + regenerate plots (deferred — low bandwidth)
|
| 156 |
+
|
| 157 |
+
## Decision Log
|
| 158 |
+
|
| 159 |
+
- **Swimmer-LN FAILED** (22.90 final): LN hurts Swimmer. Non-LN 221.12 is best. Do NOT launch more Swimmer-LN runs.
|
| 160 |
+
- **Hopper-LN 3M** (1076): WORSE than non-LN 6M (1295). More frames > LN for Hopper. Extended 6-7M LN run will tell if both helps.
|
| 161 |
+
- **LN HURTS most envs at 3M**: HalfCheetah -18%, InvPend -13%, InvDoublePend -45%, Swimmer -97%. Only helps Humanoid (+19%).
|
| 162 |
+
- **Root cause**: Critic BRN already normalizes. Actor LN over-regularizes, squashing activation scale on low/med-dim obs.
|
| 163 |
+
- **WeightNorm hypothesis**: WN reparameterizes weights without squashing activations — should avoid LN's failure. Wave 6 testing.
|
| 164 |
+
- **Humanoid-v2 iter=4**: MA 2923 at best session, likely beats SAC 2601. But uses iter=4 → ~150fps → 5.5h. VIOLATES 3h constraint. Not a valid CrossQ result.
|
| 165 |
+
- **Humanoid-LN 2M**: 506.65. iter=1 is fast (700fps) but 2M not enough data. Launched 7M run (2.8h budget).
|
| 166 |
+
- **Frame budget rule**: CrossQ at 700fps can do 7.5M in 3h. Use more frames than SAC, less than PPO.
|
| 167 |
+
- **InvDoublePend log_alpha_max=2.0**: Failed (4514 vs old 4953). Default alpha cap better for this env.
|
| 168 |
+
- **CRITICAL: LN + extended frames REVERSES 3M findings** — LN at 3M hurt most envs, but at 5-6M it BEATS non-LN baselines:
|
| 169 |
+
- HalfCheetah-LN: -18% at 3M → **+8% at 5M** (8722 vs 8085). LN needs warmup frames.
|
| 170 |
+
- Ant-LN: -5% at 3M → **+25% at 5M** (5054 vs 4046).
|
| 171 |
+
- InvDoublePend-LN: -45% at 3M → **+17% at 5M** (5796 vs 4953).
|
| 172 |
+
- Walker-LN: was already +22% at 3M, reached **4397** at 5.16M (74%) — beating SAC 3900.
|
| 173 |
+
- **iter=2 is the killer config for InvDoublePend**: 7411 at 69% completion, 50% above baseline, approaching SAC 9359.
|
| 174 |
+
- **WN promising**: Swimmer-WN 255 > non-LN 221. Walker-WN 4124 strong. Need full runs to confirm.
|
| 175 |
+
- **RunPod batch eviction**: All 13 runs killed at 01:25 UTC. Root cause: dstack credits depleted.
|
| 176 |
+
- **Strategic triage**: After relaunch, stopped 6 redundant/underperforming runs, kept 7 promising:
|
| 177 |
+
- KEPT: walker-ln-7m (beating SAC), ant-ln-7m (beating SAC), halfcheetah-ln-7m (closing gap), invdoubpend-ln-i2 (iter=2 best), swimmer-wn (WN solving), humanoid-ln-i2 (best Humanoid), humanoid-wn (alternative)
|
| 178 |
+
- STOPPED: hopper-ln-7m (LN hurts), hopper-wn (flat), walker-ln-i2 (7m better), walker-wn (7m better), invdoubpend-ln-7m (i2 much better), humanoid-ln-7m (i2 better)
|
| 179 |
+
- **FINAL RESULTS (7 runs completed)**:
|
| 180 |
+
- Walker-LN-7m: **4277** — BEATS SAC 3900 (+10%)
|
| 181 |
+
- Ant-LN-7m: **5108** — BEATS SAC 4844 (+5%)
|
| 182 |
+
- HalfCheetah-LN-7m: **8785** — gap narrowed from -17% to -10%
|
| 183 |
+
- InvDoublePend-LN-i2: **7353** — gap narrowed from -47% to -21%
|
| 184 |
+
- Humanoid-LN-i2: **1850** — massive improvement from 507 (-29% vs SAC)
|
| 185 |
+
- Humanoid-WN: **1681** — strong but LN-i2 wins
|
| 186 |
+
- Swimmer-WN: **165** — REGRESSED from 221 (high variance, consistency=-0.79). WN does NOT fix Swimmer.
|
| 187 |
+
- **LN + extended frames confirmed**: The universal recipe is LN actor + more frames. Works for 5/7 MuJoCo envs. Exceptions: Hopper (LN hurts regardless), Swimmer (LN kills, WN also fails at full run).
|
| 188 |
+
- **Swimmer paradox**: WN looked promising at 67% (MA 255) but regressed to 165 at completion. High session variance. Non-LN 221 remains best.
|
| 189 |
+
- **Humanoid strategy**: LN+iter=2 (1850) > WN (1681) > LN+iter=1 7M (706). Humanoid needs gradient density, not just data.
|
| 190 |
+
- **Hopper-i2 too slow**: 101fps with iter=2 [512,512], would take 9.6h. Stopped. Plain baseline at 1295 with 5M/iter=1 (700fps) is best. Hopper is CrossQ's weakest MuJoCo env — 22% below PPO 1654, no normalization variant helps.
|
| 191 |
+
- **Wave 7 launched**: HumanoidStandup-LN-i2 (353fps, early MA 106870 vs baseline 115730), HalfCheetah-LN-8m (708fps), InvPend-7m (plain, more data).
|
| 192 |
+
|
| 193 |
+
## Atari Investigation
|
| 194 |
+
|
| 195 |
+
### Current CrossQ vs SAC Atari Scores
|
| 196 |
+
|
| 197 |
+
| Game | CrossQ | SAC | Ratio | Verdict |
|
| 198 |
+
|------|--------|-----|-------|---------|
|
| 199 |
+
| Breakout | 0.91 | 20.23 | 4.5% | catastrophic |
|
| 200 |
+
| MsPacman | 238.51 | 1336.96 | 17.8% | catastrophic |
|
| 201 |
+
| Pong | -20.82 | 10.89 | no learning | catastrophic |
|
| 202 |
+
| Qbert | **4268.66** | 3331.98 | 128% | **CrossQ wins** |
|
| 203 |
+
| Seaquest | 216.19 | 1565.44 | 13.8% | catastrophic |
|
| 204 |
+
| SpaceInvaders | 360.37 | 507.33 | 71% | poor |
|
| 205 |
+
|
| 206 |
+
CrossQ wins 1/6 games (Qbert). The other 5 show near-total failure, with 3 games at <18% of SAC performance.
|
| 207 |
+
|
| 208 |
+
### Root Cause Analysis
|
| 209 |
+
|
| 210 |
+
**Primary hypothesis: BRN placement is wrong for ConvNets.**
|
| 211 |
+
|
| 212 |
+
The CrossQ Atari critic architecture places a single `LazyBatchRenorm1d` layer after the final FC layer (post-Flatten, post-Linear(512)). This is fundamentally different from the MuJoCo architecture where BRN layers are placed between *every* hidden FC layer (two BRN layers for [256,256], two for [512,512], etc.).
|
| 213 |
+
|
| 214 |
+
Atari critic (1 BRN layer):
|
| 215 |
+
```
|
| 216 |
+
Conv2d(32) -> ReLU -> Conv2d(64) -> ReLU -> Conv2d(64) -> ReLU -> Flatten -> Linear(512) -> BRN -> ReLU
|
| 217 |
+
```
|
| 218 |
+
|
| 219 |
+
MuJoCo critic (2 BRN layers):
|
| 220 |
+
```
|
| 221 |
+
Linear(W) -> BRN -> ReLU -> Linear(W) -> BRN -> ReLU
|
| 222 |
+
```
|
| 223 |
+
|
| 224 |
+
The CrossQ paper's core insight is that BN/BRN statistics sharing between current and next-state batches replaces target networks. With only one BRN layer after 512-dim features, the normalization may be insufficient — the ConvNet backbone (3 conv layers) processes current and next-state images with NO shared normalization. The BRN only operates on the final FC representation. This means the cross-batch statistics sharing that eliminates the need for target networks is weak.
|
| 225 |
+
|
| 226 |
+
**Secondary hypothesis: Hyperparameters ported directly from MuJoCo without ConvNet adaptation.**
|
| 227 |
+
|
| 228 |
+
Key differences between CrossQ Atari vs SAC Atari specs:
|
| 229 |
+
|
| 230 |
+
| Parameter | CrossQ Atari | SAC Atari | Issue |
|
| 231 |
+
|-----------|-------------|-----------|-------|
|
| 232 |
+
| lr | 1e-3 | 3e-4 | 3.3x higher — too aggressive for ConvNets |
|
| 233 |
+
| optimizer | Adam | AdamW | No weight decay in CrossQ |
|
| 234 |
+
| betas | [0.5, 0.999] | [0.9, 0.999] | Low beta1 for ConvNets is risky |
|
| 235 |
+
| clip_grad_val | 0.5 | 0.5 | same |
|
| 236 |
+
| loss | SmoothL1Loss | SmoothL1Loss | same |
|
| 237 |
+
| policy_delay | 3 | 1 (default) | Delays policy updates 3x |
|
| 238 |
+
| log_alpha_max | 0.5 | none (uses clamp [-5, 2]) | Tighter alpha cap |
|
| 239 |
+
| warmup_steps | 10000 | n/a | Only 10K for Atari |
|
| 240 |
+
| target networks | none | polyak 0.005 | CrossQ core difference |
|
| 241 |
+
| init_fn | orthogonal_ | orthogonal_ | same |
|
| 242 |
+
|
| 243 |
+
The `lr=1e-3` with `betas=[0.5, 0.999]` combination is specifically tuned for MuJoCo MLPs per the CrossQ paper. ConvNets are known to be more sensitive to learning rates — SAC Atari uses `lr=3e-4` which is standard for Atari. The low `beta1=0.5` reduces momentum, which may cause unstable gradient updates in ConvNets where feature maps evolve slowly.
|
| 244 |
+
|
| 245 |
+
**Tertiary hypothesis: BRN warmup_steps=10000 is too low for Atari.**
|
| 246 |
+
|
| 247 |
+
At `training_frequency=4` and `num_envs=16`, each training step consumes 64 frames. With `training_iter=3`, there are 3 gradient steps per training step. So 10K warmup means 10K BRN steps = 10K/3 = ~3333 training steps = ~213K frames (10.7% of 2M). During warmup, BRN behaves as standard BN (r_max=1, d_max=0), which has been shown to cause divergence in RL (see CrossQ standard BN results in MEMORY.md).
|
| 248 |
+
|
| 249 |
+
MuJoCo uses `warmup_steps=100000` = 100K BRN steps. At `training_frequency=1` and `num_envs=16`, that's ~1.6M frames (significant fraction of typical 3-7M runs). This much slower warmup gives the running statistics time to stabilize. Atari at 10K warmup transitions to full BRN correction far too early when running statistics are still poor.
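
The warmup arithmetic above can be sanity-checked in a few lines (the spec values are the ones quoted above):

```python
def brn_warmup_frames(warmup_steps, training_iter, training_frequency, num_envs):
    """Frames consumed before BRN warmup completes: each training step
    performs `training_iter` gradient (BRN) steps and consumes
    `training_frequency * num_envs` env frames."""
    training_steps = warmup_steps / training_iter
    return training_steps * training_frequency * num_envs

# Atari: 10K BRN steps at iter=3, freq=4, 16 envs -> ~213K frames (~10.7% of 2M)
atari_frames = brn_warmup_frames(10_000, training_iter=3, training_frequency=4, num_envs=16)

# MuJoCo: 100K BRN steps at iter=1, freq=1, 16 envs -> 1.6M frames
mujoco_frames = brn_warmup_frames(100_000, training_iter=1, training_frequency=1, num_envs=16)
```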

**Fourth hypothesis: Cross-batch forward is ineffective for ConvNets.**

In `calc_q_cross_discrete`, states and next_states are concatenated and passed through the critic together. For MuJoCo (small state vectors), this is cheap and effective — BN statistics computed over both batches provide good normalization. For Atari (84x84x4 images), the concatenated batch goes through 3 conv layers with NO normalization, then hits a single BRN layer at dim=512. The conv layers see a batch that mixes current and next frames, but without BN in the conv layers, this mixing provides no cross-batch regularization benefit. The entire CrossQ mechanism reduces to "BRN on the last FC layer of a frozen ConvNet backbone."
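
A minimal sketch of that cross-batch pass (function and variable names here are illustrative, not the actual `calc_q_cross_discrete` signature); the point is that one forward over the concatenated batch makes any batch-statistics layers compute over both halves jointly:

```python
import numpy as np

def cross_batch_forward(critic, states, next_states):
    """Run current and next states through the critic in ONE pass so
    batch-statistics layers see both halves jointly, then split the output."""
    both = np.concatenate([states, next_states], axis=0)
    q_both = critic(both)
    q, next_q = np.split(q_both, 2, axis=0)
    return q, next_q

# Toy "critic" whose output depends on batch statistics, like BN does:
def toy_critic(x):
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

states = np.ones((4, 3))
next_states = np.zeros((4, 3))
q, next_q = cross_batch_forward(toy_critic, states, next_states)
```

Because the toy critic normalizes over the joint batch, the two halves land symmetrically around zero, which is exactly the statistics-sharing effect the paragraph above says the Atari conv backbone is missing.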

### Proposed Fixes (Priority Order)

**P0: Lower learning rate to SAC-Atari defaults**
- Change `lr: 1e-3` to `lr: 3e-4` for both actor and critic
- Change `betas: [0.5, 0.999]` to the default `[0.9, 0.999]`
- Rationale: The lr=1e-3/beta1=0.5 combo is CrossQ-paper MuJoCo-specific. ConvNets need a conservative lr.

**P1: Increase BRN warmup to 100K steps**
- Change `warmup_steps: 10000` to `warmup_steps: 100000`
- Rationale: Match MuJoCo proportionally. 100K BRN steps at iter=3 = ~2.1M frames, which is the full run. This means BRN stays in near-standard-BN mode for most of training — essentially disabling the full BRN correction that may be destabilizing ConvNets.

**P2: Add BRN after each conv layer (deeper cross-batch normalization)**
- Place `LazyBatchRenorm1d` (or `BatchRenorm2d`, which would need implementation) after each Conv2d layer
- Rationale: The CrossQ paper's mechanism relies on shared BN statistics between current/next batches. With BRN only at the FC layer, the ConvNet backbone has no cross-batch normalization, defeating the purpose.
- Note: This requires implementing `BatchRenorm2d` (a 2D spatial variant). Standard `BatchNorm2d` normalizes per-channel across spatial dims — a `BatchRenorm2d` would do the same with correction factors.
- **Risk**: This is a code change, not a spec-only fix. Higher effort.
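
For scoping, a `BatchRenorm2d` could look roughly like the sketch below (per-channel Batch Renorm over `(N, H, W)`, following Ioffe 2017; the warmup scheduling of `r_max`/`d_max` that the 1D version uses is omitted here, and none of this is the lab's actual implementation):

```python
import torch
import torch.nn as nn

class BatchRenorm2d(nn.Module):
    """Sketch of a 2D Batch Renormalization layer: per-channel statistics
    over (N, H, W) with r/d correction factors toward running statistics."""

    def __init__(self, num_features, momentum=0.01, eps=1e-5, r_max=3.0, d_max=5.0):
        super().__init__()
        self.momentum, self.eps = momentum, eps
        self.r_max, self.d_max = r_max, d_max
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer('running_mean', torch.zeros(num_features))
        self.register_buffer('running_std', torch.ones(num_features))

    def forward(self, x):  # x: (N, C, H, W)
        c = lambda t: t.view(1, -1, 1, 1)  # broadcast per-channel stats
        if self.training:
            mean = x.mean(dim=(0, 2, 3))
            std = x.std(dim=(0, 2, 3), unbiased=False) + self.eps
            # r, d pull batch stats toward running stats; detached per the paper
            r = (std / self.running_std).clamp(1 / self.r_max, self.r_max).detach()
            d = ((mean - self.running_mean) / self.running_std).clamp(-self.d_max, self.d_max).detach()
            x_hat = (x - c(mean)) / c(std) * c(r) + c(d)
            with torch.no_grad():
                self.running_mean += self.momentum * (mean - self.running_mean)
                self.running_std += self.momentum * (std - self.running_std)
        else:
            x_hat = (x - c(self.running_mean)) / c(self.running_std)
        return c(self.weight) * x_hat + c(self.bias)
```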

**P3: Remove policy_delay for Atari**
- Change `policy_delay: 3` to `policy_delay: 1`
- Rationale: SAC Atari uses no policy delay. With only 2M frames and iter=3, policy_delay=3 means the policy is updated once every 3 critic updates. Combined with the already-low frame budget, the policy may not get enough gradient updates to learn.
- Total policy updates at 2M frames: (2M / (4 * 16)) * 3 / 3 = 31,250. Without delay: 93,750. 3x more policy updates.
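
The update counts above follow directly from the spec values (a quick check):

```python
def policy_update_count(frames, training_frequency, num_envs, training_iter, policy_delay):
    """Actor updates over a run: frames -> training steps -> gradient steps,
    with the actor updated every `policy_delay` gradient steps."""
    training_steps = frames // (training_frequency * num_envs)   # 2M / 64 = 31,250
    gradient_steps = training_steps * training_iter              # * 3 = 93,750
    return gradient_steps // policy_delay

with_delay = policy_update_count(2_000_000, 4, 16, 3, policy_delay=3)     # 31,250
without_delay = policy_update_count(2_000_000, 4, 16, 3, policy_delay=1)  # 93,750
```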

**P4: Switch to AdamW with weight decay**
- Match SAC Atari's `AdamW` with `eps: 0.0001`
- Rationale: Weight decay provides implicit regularization that may partially compensate for the missing target-network smoothing.

### Experiment Plan

1. **Exp A** (spec-only, highest impact): lr=3e-4, betas=[0.9,0.999], warmup=100K, policy_delay=1. Test on Pong + Breakout (fast-signal games).
2. **Exp B** (spec-only): Same as A but keep policy_delay=3. Isolates the lr/warmup effect.
3. **Exp C** (spec-only): Same as A but lr=1e-3 (keep the CrossQ lr). Isolates the beta/warmup effect.
4. **Exp D** (code change): Add BatchRenorm2d after conv layers. Test with Exp A settings.
If Exp A solves the problem, no code changes needed. If not, Exp D addresses the fundamental architectural mismatch.
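
For reference, Exp A as a spec override might look like the fragment below (the key names follow the parameters in the comparison table; their exact nesting inside `crossq_atari.yaml` is an assumption):

```yaml
# Exp A: spec-only overrides for crossq_atari.yaml (illustrative)
net:
  optim_spec:
    name: Adam
    lr: 3.0e-4            # was 1e-3
    betas: [0.9, 0.999]   # was [0.5, 0.999]
  warmup_steps: 100000    # was 10000
algorithm:
  policy_delay: 1         # was 3
```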

### Key Insight

The Qbert success is telling. Qbert has relatively simple visual patterns and discrete state changes — the ConvNet can extract good features even with aggressive lr. Games like Pong and Breakout require precise spatial reasoning where ConvNet feature quality matters more, and the aggressive lr/low-momentum combo destabilizes learning before features mature.

---

**docs/IMPROVEMENTS_ROADMAP.md** (new file)

# SLM Lab Improvements Roadmap

SLM Lab's algorithms (PPO, SAC) are architecturally sound but use 2017-era defaults. This roadmap integrates material advances from the post-PPO RL landscape.

**Source**: [`notes/literature/ai/rl-landscape-2026.md`](../../notes/literature/ai/rl-landscape-2026.md)

**Hardware**: Mac (Apple Silicon) for dev, cloud GPU (A100/H100) for runs.

---

## Status

| Step | What | Status |
|:---:|------|--------|
| **1** | **GPU envs (MuJoCo Playground)** | **NEXT** |
| 2 | Normalization stack (layer norm, percentile) | DONE |
| 3 | CrossQ algorithm (batch norm critics) | DONE |
| 4 | Combine + full benchmark suite | TODO (after Step 1) |
| 5 | High-UTD SAC / RLPD | TODO |
| 6 | Pretrained vision encoders | TODO |

---

## NEXT: Step 1 — GPU Envs (MuJoCo Playground)

**Goal**: Remove the env as the bottleneck. Run physics on GPU via [MuJoCo Playground](https://github.com/google-deepmind/mujoco_playground), keep training in PyTorch. Scale to 1000+ parallel envs for large-scale runs.

### The Stack

```
MuJoCo Playground   ← env definitions, registry, wrappers
        ↓
Brax                ← EpisodeWrapper, AutoResetWrapper
        ↓
MuJoCo MJX          ← JAX reimplementation of MuJoCo physics (GPU/TPU)
        ↓
JAX / XLA           ← jit, vmap
```

### API Difference

Playground uses a **stateless functional API**, not Gymnasium OOP:

```python
# Gymnasium (today)                          # Playground
env = gym.make("HalfCheetah-v5")             env = registry.load("CheetahRun")
obs, info = env.reset()                      state = env.reset(rng)       # → State dataclass
obs, rew, term, trunc, info = env.step(a)    state = env.step(state, a)   # → new State
```

Key differences: functional (state passed explicitly), `jax.vmap` for batching (not `VectorEnv`), `jax.jit` for GPU compilation, single `done` flag (no term/trunc split), `observation_size`/`action_size` ints (no `gym.spaces`).
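
The stateless pattern can be illustrated with a toy env (plain Python here; `State`, `reset`, and `step` mimic the shape of Playground's API, not the real classes):

```python
from typing import NamedTuple

class State(NamedTuple):
    """Toy stand-in for Playground's State dataclass."""
    obs: float
    reward: float
    done: bool

def reset(rng: int) -> State:
    return State(obs=float(rng), reward=0.0, done=False)

def step(state: State, action: float) -> State:
    obs = state.obs + action
    # single done flag: no terminated/truncated split as in Gymnasium
    return State(obs=obs, reward=-abs(obs), done=abs(obs) > 10.0)

# State is threaded explicitly and never mutated in place; under JAX this is
# what makes the whole rollout jit-compilable and vmap-able across envs.
state = reset(rng=0)
for _ in range(3):
    state = step(state, action=1.0)
```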

### Environment Catalog

**DM Control Suite (25 envs)** — standard RL benchmarks, but dm_control versions (different obs/reward/termination from Gymnasium MuJoCo):

| Playground | Nearest Gymnasium | Notes |
|-----------|-------------------|-------|
| `CheetahRun` | `HalfCheetah-v5` | Tolerance reward (target speed=10) |
| `HopperHop` / `HopperStand` | `Hopper-v5` | Different reward |
| `WalkerWalk` / `WalkerRun` | `Walker2d-v5` | dm_control version |
| `HumanoidWalk` / `HumanoidRun` | `Humanoid-v5` | CMU humanoid |
| `CartpoleSwingup` | `CartPole-v1` | Swing-up (harder) |
| `ReacherEasy/Hard`, `FingerSpin/Turn*`, `FishSwim`, `PendulumSwingup`, `SwimmerSwimmer6` | — | Various |

No Ant equivalent. Results are NOT comparable across env suites.

**Locomotion (19 envs)** — real robots (Unitree Go1/G1/H1, Spot, etc.) with joystick control, gait tracking, recovery.

**Manipulation (10 envs)** — Aloha bimanual, Franka Panda, LEAP hand dexterity.

### Performance

Single-env MJX is ~10x slower than CPU MuJoCo. The win comes from massive parallelism:

| Hardware | Batch Size | Humanoid steps/sec |
|----------|-----------|-------------------|
| M3 Max (CPU) | ~128 | 650K |
| A100 (MJX) | 8,192 | 950K |

Training throughput on a single A100: ~720K steps/sec (Cartpole PPO), ~91K steps/sec (Humanoid PPO). SAC is 25-50x slower than PPO (off-policy overhead).

**Wall clock (1M frames)**: CPU ~80 min → GPU <5 min (PPO), ~30 min (SAC).

### Integration Design

Adapter at the env boundary. Algorithms unchanged.

```
Spec: env.backend = "playground", env.name = "CheetahRun", env.num_envs = 4096
        ↓
make_env() routes on backend
        ↓
PlaygroundVecEnv(VectorEnv)   ← jit+vmap internally, DLPack zero-copy at boundary
        ↓
VectorClockWrapper → Session.run_rl() (existing, unchanged)
```

Reference implementations: Playground's [`wrapper_torch.py`](https://github.com/google-deepmind/mujoco_playground/blob/main/mujoco_playground/_src/wrapper_torch.py) (`RSLRLBraxWrapper`), [skrl](https://skrl.readthedocs.io/en/develop/api/envs/wrapping.html) Gymnasium-like wrapper.

### Changes

- `slm_lab/env/playground.py`: **New** — `PlaygroundVecEnv(VectorEnv)` adapter (JIT, vmap, DLPack, auto-reset, RNG management)
- `slm_lab/env/__init__.py`: `backend` routing in `make_env()`
- `pyproject.toml`: Optional `[playground]` dependency group (`mujoco-playground`, `jax[cuda12]`, `mujoco-mjx`, `brax`)
- Specs: New specs with `backend: playground`, Playground env names, `num_envs: 4096`

No changes to: algorithms, networks, memory, training loop, experiment control.

### Gotchas

1. **JIT startup**: The first `reset()`/`step()` triggers XLA compilation (10-60s). One-time cost.
2. **Static shapes**: `num_envs` is fixed at construction. Contacts are padded to the max possible.
3. **Ampere precision**: RTX 30/40 cards need `JAX_DEFAULT_MATMUL_PRECISION=highest` or training destabilizes.
4. **No Atari**: Playground is physics-only. Atari stays on CPU Gymnasium.

### Verify

PPO on CheetahRun — same reward as the CPU baseline, 100x+ faster wall clock (4096 envs, A100).

### Migration Path

1. **Phase 1** (this step): Adapter + DM Control locomotion (CheetahRun, HopperHop, WalkerWalk, HumanoidWalk/Run)
2. **Phase 2**: Robotics envs (Unitree Go1/G1, Spot, Franka Panda, LEAP hand)
3. **Phase 3**: Isaac Lab (same adapter pattern, PhysX backend, sim-to-real)

---

## TODO: Step 4 — Combine + Full Benchmark Suite

**Goal**: Run PPO v2 and CrossQ+norm on MuJoCo envs. Record wall clock and final reward (mean ± std, 4 seeds). This is the "before/after" comparison for the roadmap.

**Runs to dispatch** (via dstack, see `docs/BENCHMARKS.md`):

| Algorithm | Env | Spec | Frames |
|-----------|-----|------|--------|
| PPO v2 | HalfCheetah-v5 | `ppo_mujoco_v2_arc.yaml` | 1M |
| PPO v2 | Humanoid-v5 | `ppo_mujoco_v2_arc.yaml` | 2M |
| SAC v2 | HalfCheetah-v5 | `sac_mujoco_v2_arc.yaml` | 1M |
| SAC v2 | Humanoid-v5 | `sac_mujoco_v2_arc.yaml` | 2M |
| CrossQ | HalfCheetah-v5 | `crossq_mujoco.yaml` | 4M |
| CrossQ | Humanoid-v5 | `crossq_mujoco.yaml` | 1M |
| CrossQ | Hopper-v5 | `crossq_mujoco.yaml` | 3M |
| CrossQ | Ant-v5 | `crossq_mujoco.yaml` | 2M |

**Verify**: Both algorithms beat their v1/SAC baselines on at least 2/3 envs.

**Local testing results (200k frames, 4 sessions)**:
- PPO v2 (layer norm + percentile) beats baseline on Humanoid (272.67 vs 246.83, consistency 0.78 vs 0.70)
- Layer norm is the most reliable individual feature — helps on LunarLander (+56%) and Humanoid (+8%)
- CrossQ beats SAC on CartPole (383 vs 238) and Humanoid (365 vs 356), with higher consistency
- CrossQ unstable on Ant (loss divergence) — may need tuning for high-dimensional action spaces

---

## Completed: Step 2 — Normalization Stack

**v2 = layer_norm + percentile normalization** (symlog dropped — harms model-free RL).

Changes:
- `net_util.py` / `mlp.py`: `layer_norm` and `batch_norm` params in `build_fc_model()` / `MLPNet`
- `actor_critic.py`: `PercentileNormalizer` (EMA-tracked 5th/95th percentile advantage normalization)
- `math_util.py`: `symlog` / `symexp` (retained but excluded from v2 defaults)
- `ppo.py` / `sac.py`: symlog + percentile normalization integration
- Specs: `ppo_mujoco_v2_arc.yaml`, `sac_mujoco_v2_arc.yaml`
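
For orientation, the two pieces can be sketched as follows (the real `PercentileNormalizer` lives in `actor_critic.py`; the EMA details and the lower clamp on the scale here are assumptions based on the description above):

```python
import numpy as np

def symlog(x):
    """Symmetric log compression (DreamerV3): sign(x) * log(1 + |x|)."""
    return np.sign(x) * np.log1p(np.abs(x))

def symexp(x):
    """Inverse of symlog: sign(x) * (exp(|x|) - 1)."""
    return np.sign(x) * np.expm1(np.abs(x))

class PercentileNormalizer:
    """EMA-tracked 5th/95th-percentile advantage scaling (sketch)."""
    def __init__(self, beta=0.99, eps=1e-8):
        self.beta, self.eps = beta, eps
        self.scale = None

    def __call__(self, adv):
        p5, p95 = np.percentile(adv, [5.0, 95.0])
        batch_scale = max(p95 - p5, self.eps)
        # EMA-smooth the percentile range across batches
        self.scale = batch_scale if self.scale is None else \
            self.beta * self.scale + (1 - self.beta) * batch_scale
        return adv / max(self.scale, 1.0)  # assumed: never amplify small advantages
```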

## Completed: Step 3 — CrossQ

CrossQ: a SAC variant with no target networks. Uses cross-batch normalization on concatenated (s,a) and (s',a') batches.

Changes:
- `crossq.py`: CrossQ algorithm inheriting from SAC
- `algorithm/__init__.py`: CrossQ import
- Specs: `benchmark/crossq/crossq_mujoco.yaml`, `crossq_classic.yaml`, `crossq_box2d.yaml`, `crossq_atari.yaml`
- Future: actor LayerNorm (TorchArc YAML) — may help underperforming envs (Hopper, InvPendulum, Humanoid)

---

## TODO: Step 5 — High-UTD SAC / RLPD

**Goal**: `utd_ratio` alias for `training_iter`, demo buffer via a `ReplayWithDemos` subclass.

Changes:
- `sac.py`: `utd_ratio` alias for `training_iter`
- `replay.py`: `ReplayWithDemos` subclass (50/50 symmetric sampling from demo and online data)
- Spec: `sac_mujoco_highutd_arc.yaml` (UTD=20 + layer norm critic)
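
The 50/50 symmetric sampling idea (RLPD-style) can be sketched like this (`ReplayWithDemos` internals are assumptions; only the sampling split is the point):

```python
import random

def sample_symmetric(demo_data, online_data, batch_size, rng=random):
    """Draw half the batch from demonstrations and half from online
    experience, as in RLPD-style symmetric sampling."""
    half = batch_size // 2
    batch = rng.sample(demo_data, half) + rng.sample(online_data, batch_size - half)
    rng.shuffle(batch)
    return batch

demos = [('demo', i) for i in range(100)]
online = [('online', i) for i in range(100)]
batch = sample_symmetric(demos, online, batch_size=8)
```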

**Verify**: High-UTD SAC on Hopper-v5 should converge in ~50% fewer env steps than standard SAC.

## TODO: Step 6 — Pretrained Vision Encoders

**Goal**: DINOv2 encoder via torcharc, DrQ augmentation wrapper.

Changes:
- `pretrained.py`: `PretrainedEncoder` module (DINOv2, freeze/fine-tune, projection)
- `wrappers.py`: `RandomShiftWrapper` (DrQ-v2 ±4px shift augmentation)
- Spec: `ppo_vision_arc.yaml`

**Verify**: PPO with DINOv2 on DMControl pixel tasks (Walker Walk, Cartpole Swingup). Frozen vs fine-tuned comparison.

---

## Environment Plan (Future)

Three tiers of environment coverage:

| Tier | Platform | Purpose |
|------|----------|---------|
| **Broad/Basic** | [Gymnasium](https://gymnasium.farama.org/) | Standard RL benchmarks (CartPole, MuJoCo, Atari) |
| **Physics-rich** | [DeepMind MuJoCo Playground](https://github.com/google-deepmind/mujoco_playground) | GPU-accelerated locomotion, manipulation, dexterous tasks |
| **Sim-to-real** | [NVIDIA Isaac Lab](https://github.com/isaac-sim/IsaacLab) | GPU-accelerated, sim-to-real transfer, robot learning |

---

## Key Findings

- **Symlog harms model-free RL**: Tested across 6 envs — it consistently hurts PPO and SAC. Designed for DreamerV3's world model, not direct value/Q-target compression.
- **Layer norm is the most reliable feature**: Helps on harder envs (LunarLander +56%, Humanoid +8%), neutral on simple envs.
- **CrossQ unstable on some envs**: Loss divergence on Ant and Hopper; stable on CartPole and Humanoid. May need batch norm tuning for high-dimensional action spaces.
- **Features help more on harder envs**: Simple envs (CartPole, Acrobot) — baseline wins. Complex envs (Humanoid) — v2 and CrossQ win.

## Not in Scope

- World models (DreamerV3/RSSM) — Dasein Agent Phase 3.2c
- Plasticity loss mitigation (CReLU, periodic resets) — future work
- PPO+ principled fixes (ICLR 2025) — evaluate after the base normalization stack

---

**docs/plots**: 21 benchmark plot PNGs updated and 1 added (LunarLanderContinuous-v3), all stored via Git LFS.