SLM-Lab Benchmarks
Reproducible deep RL algorithm validation across Gymnasium environments (Classic Control, Box2D, MuJoCo, Atari).
Usage
After installation, copy SPEC_FILE and SPEC_NAME from the result tables below (Atari uses one shared spec file; see Phase 4).
Running Benchmarks
Local - runs on your machine (Classic Control: minutes):
slm-lab run SPEC_FILE SPEC_NAME train
Remote - cloud GPU via dstack, auto-syncs to HuggingFace:
source .env && slm-lab run-remote --gpu SPEC_FILE SPEC_NAME train -n NAME
Remote setup: cp .env.example .env then set HF_TOKEN. See README for dstack config.
Atari
All games share one spec file (57 games tested; 6 skipped: 5 hard-exploration, 1 deprecated). Use -s env=ENV to substitute the game. Runs take ~2-3 hours on GPU.
source .env && slm-lab run-remote --gpu -s env=ALE/Pong-v5 slm_lab/spec/benchmark_arc/ppo/ppo_atari_arc.yaml ppo_atari_arc train -n pong
Download Results
Trained models and metrics sync to HuggingFace. Pull locally:
source .env && slm-lab pull SPEC_NAME
slm-lab list # see available experiments
Benchmark Contribution
To ensure benchmark integrity, follow these steps when adding or updating results:
1. Audit Spec Settings
- Before Running: Ensure `spec.yaml` matches the Settings line defined in each benchmark table.
  - Example: max_frame 3e5 | num_envs 4 | max_session 4 | log_frequency 500
- After Pulling: Verify the downloaded `spec.yaml` matches these rules before using the data.
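The audit step above can be scripted. A minimal sketch, assuming the spec parses into a dict with `env`-list and `meta` sections (the key paths are assumptions; in practice load the real file with `yaml.safe_load` and adjust to the actual layout):

```python
# Expected values from a benchmark table's Settings line (CartPole-v1 here).
EXPECTED = {"max_frame": 2e5, "num_envs": 4, "max_session": 4, "log_frequency": 500}

def audit(spec, expected=EXPECTED):
    """Return a list of mismatch messages; an empty list means the spec passes."""
    found = {
        "max_frame": spec["env"][0]["max_frame"],
        "num_envs": spec["env"][0]["num_envs"],
        "max_session": spec["meta"]["max_session"],
        "log_frequency": spec["meta"]["log_frequency"],
    }
    return [f"{k}: got {found[k]}, want {want}"
            for k, want in expected.items() if found[k] != want]

# Example spec dict shaped like a CartPole-v1 run
spec = {"env": [{"max_frame": 2e5, "num_envs": 4}],
        "meta": {"max_session": 4, "log_frequency": 500}}
print(audit(spec))  # [] -> settings match
```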
2. Run Benchmark & Commit Specs
- Run: Execute the benchmark locally or remotely using the commands in Usage.
- Commit Specs: Always commit the `spec.yaml` file used for the run to the repo.
- Table Entry: Ensure `BENCHMARKS.md` has an entry with the correct `SPEC_FILE` and `SPEC_NAME`.
3. Record Scores & Plots
- Score: At run completion, extract `total_reward_ma` from the logs (trial_metrics).
- Link: Add the HuggingFace folder link: [FOLDER](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/FOLDER)
- Pull data: source .env && uv run hf download SLM-Lab/benchmark --include "data/FOLDER/*" --local-dir hf_data --repo-type dataset
- Plot: Generate with the folders from the table: slm-lab plot -t "CartPole-v1" -f ppo_cartpole_2026...,dqn_cartpole_2026...
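Extracting the score can be sketched as follows; the metrics layout below is an assumption (check the actual trial_metrics file your run produces), and in practice you would load it from disk, e.g. with `json.load`:

```python
def final_score(metrics):
    """Return the last moving-average reward recorded over the trial."""
    return metrics["total_reward_ma"][-1]

# Hypothetical trial_metrics content for a CartPole-v1 run
metrics = {"frame": [1e5, 2e5], "total_reward_ma": [181.4, 452.9]}
print(final_score(metrics))  # 452.9
```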
Environment Settings
Standardized settings for fair comparison. The Settings line in each result table shows these values.
| Env Category | num_envs | max_frame | log_frequency | grace_period |
|---|---|---|---|---|
| Classic Control | 4 | 2e5-3e5 | 500 | 1e4 |
| Box2D | 8 | 3e5 | 1000 | 5e4 |
| MuJoCo | 16 | 1e6-10e6 | 1e4 | 1e5-1e6 |
| Atari | 16 | 10e6 | 1e4 | 5e5 |
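As an illustration, the Classic Control row maps onto a spec roughly like this. This is a sketch only; key names and nesting are assumptions, so check a committed `spec.yaml` under slm_lab/spec/ for the real layout:

```yaml
ppo_cartpole:
  env:
    - name: CartPole-v1
      num_envs: 4        # Classic Control row
      max_frame: 2e5
  meta:
    max_session: 4
    log_frequency: 500
```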
Hyperparameter Search
When an algorithm fails to reach its target, run search instead of train:
slm-lab run SPEC_FILE SPEC_NAME search # local
source .env && slm-lab run-remote --gpu SPEC_FILE SPEC_NAME search -n NAME # remote
| Stage | Mode | Config | Purpose |
|---|---|---|---|
| ASHA | search | max_session=1, search_scheduler enabled | Wide exploration with early stopping |
| Multi | search | max_session=4, no search_scheduler | Robust validation with averaging |
| Validate | train | Final spec | Confirmation run |
Do not use search results in the benchmark tables; use the final validation run with its committed spec.
Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20+ = 5+ dims).
```json
{
  "meta": {
    "max_session": 1, "max_trial": 16,
    "search_resources": {"cpu": 1, "gpu": 0.125},
    "search_scheduler": {"grace_period": 1e5, "reduction_factor": 3}
  },
  "search": {
    "agent.algorithm.gamma__uniform": [0.98, 0.999],
    "agent.algorithm.lam__uniform": [0.9, 0.98],
    "agent.net.optim_spec.lr__loguniform": [1e-4, 1e-3]
  }
}
```
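The trials-per-dimension budget rule can be sketched as a one-liner (the 3.5 midpoint is our assumption, not a project constant):

```python
def trial_budget(n_dims, trials_per_dim=3.5):
    """Suggested max_trial for a search over n_dims hyperparameters,
    with a floor of 8 trials."""
    return max(8, round(n_dims * trials_per_dim))

print(trial_budget(3))  # 3 dims -> 10 trials
```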
Progress
| Phase | Category | Envs | REINFORCE | SARSA | DQN | DDQN+PER | A2C | PPO | SAC | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Classic Control | 3 | ✅ | ✅ | ⚠️ | ✅ | ✅ | ✅ | ✅ | Done |
| 2 | Box2D | 2 | N/A | N/A | ⚠️ | ✅ | ❌ | ⚠️ | ⚠️ | Done |
| 3 | MuJoCo | 11 | N/A | N/A | N/A | N/A | N/A | ⚠️ | ⚠️ | Done |
| 4 | Atari | 57 | N/A | N/A | N/A | Skip | Done | Done | Done | Done |
Legend: ✅ Solved | ⚠️ Close (>80%) | 📊 Acceptable | ❌ Failed | 🔄 In progress/Pending | Skip Not started | N/A Not applicable
Results
Phase 1: Classic Control
1.1 CartPole-v1
Docs: CartPole | State: Box(4) | Action: Discrete(2) | Target reward MA > 400
Settings: max_frame 2e5 | num_envs 4 | max_session 4 | log_frequency 500
1.2 Acrobot-v1
Docs: Acrobot | State: Box(6) | Action: Discrete(3) | Target reward MA > -100
Settings: max_frame 3e5 | num_envs 4 | max_session 4 | log_frequency 500
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| DQN | ✅ | -94.17 | slm_lab/spec/benchmark_arc/dqn/dqn_classic_arc.yaml | dqn_boltzmann_acrobot_arc | dqn_boltzmann_acrobot_arc_2026_02_11_144342 |
| DDQN+PER | ✅ | -83.92 | slm_lab/spec/benchmark_arc/dqn/dqn_classic_arc.yaml | ddqn_per_acrobot_arc | ddqn_per_acrobot_arc_2026_02_11_153725 |
| A2C | ✅ | -83.99 | slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml | a2c_gae_acrobot_arc | a2c_gae_acrobot_arc_2026_02_11_153806 |
| PPO | ✅ | -81.28 | slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml | ppo_acrobot_arc | ppo_acrobot_arc_2026_02_11_153758 |
| SAC | ✅ | -92.60 | slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml | sac_acrobot_arc | sac_acrobot_arc_2026_02_11_162211 |
| CrossQ | ✅ | -103.89 | slm_lab/spec/benchmark/crossq/crossq_classic.yaml | crossq_acrobot | crossq_acrobot_2026_02_23_122342 |
1.3 Pendulum-v1
Docs: Pendulum | State: Box(3) | Action: Box(1) | Target reward MA > -200
Settings: max_frame 3e5 | num_envs 4 | max_session 4 | log_frequency 500
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| A2C | ❌ | -820.74 | slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml | a2c_gae_pendulum_arc | a2c_gae_pendulum_arc_2026_02_11_162217 |
| PPO | ✅ | -174.87 | slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml | ppo_pendulum_arc | ppo_pendulum_arc_2026_02_11_162156 |
| SAC | ✅ | -150.97 | slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml | sac_pendulum_arc | sac_pendulum_arc_2026_02_11_162240 |
| CrossQ | ✅ | -163.52 | slm_lab/spec/benchmark/crossq/crossq_classic.yaml | crossq_pendulum | crossq_pendulum_2026_02_21_123841 |
Phase 2: Box2D
2.1 LunarLander-v3
Docs: LunarLander | State: Box(8) | Action: Discrete(4) | Target reward MA > 200
Settings: max_frame 3e5 | num_envs 8 | max_session 4 | log_frequency 1000
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| DQN | ⚠️ | 195.21 | slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml | dqn_concat_lunar_arc | dqn_concat_lunar_arc_2026_02_11_201407 |
| DDQN+PER | ✅ | 265.90 | slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml | ddqn_per_concat_lunar_arc | ddqn_per_concat_lunar_arc_2026_02_13_105115 |
| A2C | ❌ | 27.38 | slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml | a2c_gae_lunar_arc | a2c_gae_lunar_arc_2026_02_11_224304 |
| PPO | ⚠️ | 183.30 | slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml | ppo_lunar_arc | ppo_lunar_arc_2026_02_11_201303 |
| SAC | ⚠️ | 106.17 | slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml | sac_lunar_arc | sac_lunar_arc_2026_02_11_201417 |
| CrossQ | ❌ | 136.25 | slm_lab/spec/benchmark/crossq/crossq_box2d.yaml | crossq_lunar | crossq_lunar_2026_02_21_123730 |
2.2 LunarLanderContinuous-v3
Docs: LunarLander | State: Box(8) | Action: Box(2) | Target reward MA > 200
Settings: max_frame 3e5 | num_envs 8 | max_session 4 | log_frequency 1000
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| A2C | ❌ | -76.81 | slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml | a2c_gae_lunar_continuous_arc | a2c_gae_lunar_continuous_arc_2026_02_11_224301 |
| PPO | ⚠️ | 132.58 | slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml | ppo_lunar_continuous_arc | ppo_lunar_continuous_arc_2026_02_11_224229 |
| SAC | ⚠️ | 125.00 | slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml | sac_lunar_continuous_arc | sac_lunar_continuous_arc_2026_02_12_222203 |
| CrossQ | ✅ | 249.85 | slm_lab/spec/benchmark/crossq/crossq_box2d.yaml | crossq_lunar_continuous | crossq_lunar_continuous_arc_2026_02_21_100052 |
Phase 3: MuJoCo
Docs: MuJoCo environments | State/Action: Continuous | Target: Practical baselines (no official "solved" threshold)
Settings: max_frame 4e6-10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
Algorithms: PPO and SAC. Network: MLP [256,256], orthogonal init. PPO uses tanh activation; SAC uses relu.
Note on SAC frame budgets: SAC uses higher update-to-data ratios (more gradient updates per step), making it more sample-efficient but slower per frame than PPO. SAC benchmarks use 1-4M frames (vs PPO's 4-10M) to fit within practical GPU wall-time limits (~6h). Scores may still be improving at cutoff.
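The arithmetic behind the note is simple (illustrative only, not SLM-Lab internals): total gradient updates scale as frames / training_frequency * training_iter, so a higher update-to-data ratio means more compute per frame collected.

```python
def gradient_updates(frames, training_frequency, training_iter):
    """Total gradient updates over a run: one batch of training_iter
    updates every training_frequency env frames."""
    return frames // training_frequency * training_iter

sac_updates = gradient_updates(2_000_000, 4, 3)     # SAC-like 2M-frame budget
ppo_updates = gradient_updates(10_000_000, 128, 4)  # rough PPO analogue at 10M frames
print(sac_updates, ppo_updates)  # 1500000 312500
```

Under these hypothetical settings, SAC performs roughly 5x the gradient updates on one fifth of the frames, which is why its wall-time budget forces a smaller frame count.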
Spec Files (one file per algorithm, all envs via YAML anchors):
- PPO: ppo_mujoco_arc.yaml
- SAC: sac_mujoco_arc.yaml
Spec Variants: Each file has a base config (shared via YAML anchors) with per-env overrides:
| SPEC_NAME | Envs | Key Config |
|---|---|---|
| ppo_mujoco_arc | HalfCheetah, Walker, Humanoid, HumanoidStandup | Base: gamma=0.99, lam=0.95, lr=3e-4 |
| ppo_mujoco_longhorizon_arc | Reacher, Pusher | gamma=0.997, lam=0.97, lr=2e-4, entropy=0.001 |
| ppo_{env}_arc | Ant, Hopper, Swimmer, IP, IDP | Per-env tuned (gamma, lam, lr) |
| sac_mujoco_arc | (generic, use with -s flags) | Base: gamma=0.99, iter=4, lr=3e-4, [256,256] |
| sac_{env}_arc | All 11 envs | Per-env tuned (iter, gamma, lr, net size) |
Reproduce: Copy SPEC_NAME and MAX_FRAME from the table below.
# PPO: env and max_frame are parameterized via -s flags
source .env && slm-lab run-remote --gpu -s env=ENV -s max_frame=MAX_FRAME \
slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml SPEC_NAME train -n NAME
# SAC: env and max_frame are hardcoded per spec; no -s flags needed
source .env && slm-lab run-remote --gpu \
slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml SPEC_NAME train -n NAME
| ENV | SPEC_NAME | MAX_FRAME |
|---|---|---|
| Ant-v5 | ppo_ant_arc | 10e6 |
| | sac_ant_arc | 2e6 |
| HalfCheetah-v5 | ppo_mujoco_arc | 10e6 |
| | sac_halfcheetah_arc | 4e6 |
| Hopper-v5 | ppo_hopper_arc | 4e6 |
| | sac_hopper_arc | 3e6 |
| Humanoid-v5 | ppo_mujoco_arc | 10e6 |
| | sac_humanoid_arc | 1e6 |
| HumanoidStandup-v5 | ppo_mujoco_arc | 4e6 |
| | sac_humanoid_standup_arc | 1e6 |
| InvertedDoublePendulum-v5 | ppo_inverted_double_pendulum_arc | 10e6 |
| | sac_inverted_double_pendulum_arc | 2e6 |
| InvertedPendulum-v5 | ppo_inverted_pendulum_arc | 4e6 |
| | sac_inverted_pendulum_arc | 2e6 |
| Pusher-v5 | ppo_mujoco_longhorizon_arc | 4e6 |
| | sac_pusher_arc | 1e6 |
| Reacher-v5 | ppo_mujoco_longhorizon_arc | 4e6 |
| | sac_reacher_arc | 1e6 |
| Swimmer-v5 | ppo_swimmer_arc | 4e6 |
| | sac_swimmer_arc | 2e6 |
| Walker2d-v5 | ppo_mujoco_arc | 10e6 |
| | sac_walker2d_arc | 3e6 |
3.1 Ant-v5
Docs: Ant | State: Box(105) | Action: Box(8) | Target reward MA > 2000
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | 2138.28 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_ant_arc | ppo_ant_arc_ant_2026_02_12_190644 |
| SAC | ✅ | 4942.91 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_ant_arc | sac_ant_arc_2026_02_11_225529 |
| CrossQ | ✅ | 5108.47 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_ant_ln_7m | crossq_ant_ln_7m_2026_02_22_015136 |
3.2 HalfCheetah-v5
Docs: HalfCheetah | State: Box(17) | Action: Box(6) | Target reward MA > 5000
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | 6240.68 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_mujoco_arc | ppo_mujoco_arc_halfcheetah_2026_02_12_195553 |
| SAC | ✅ | 9815.16 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_halfcheetah_arc | sac_halfcheetah_4m_i2_arc_2026_02_14_185522 |
| CrossQ | ✅ | 9969.18 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_halfcheetah_ln_8m | crossq_halfcheetah_ln_8m_2026_02_22_111117 |
3.3 Hopper-v5
Docs: Hopper | State: Box(11) | Action: Box(3) | Target reward MA ~ 2000
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ⚠️ | 1653.74 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_hopper_arc | ppo_hopper_arc_hopper_2026_02_12_222206 |
| SAC | ⚠️ | 1416.52 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_hopper_arc | sac_hopper_3m_i4_arc_2026_02_14_185434 |
| CrossQ | ⚠️ | 1295.21 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_hopper | crossq_hopper_2026_02_21_173921 |
3.4 Humanoid-v5
Docs: Humanoid | State: Box(348) | Action: Box(17) | Target reward MA > 1000
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | 2661.26 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_mujoco_arc | ppo_mujoco_arc_humanoid_2026_02_12_185439 |
| SAC | ✅ | 1989.65 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_humanoid_arc | sac_humanoid_arc_2026_02_12_020016 |
| CrossQ | ✅ | 1850.44 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_humanoid_ln_i2 | crossq_humanoid_ln_i2_2026_02_22_014755 |
3.5 HumanoidStandup-v5
Docs: HumanoidStandup | State: Box(348) | Action: Box(17) | Target reward MA > 100000
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | 150104.59 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_mujoco_arc | ppo_mujoco_arc_humanoidstandup_2026_02_12_115050 |
| SAC | ✅ | 137357.00 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_humanoid_standup_arc | sac_humanoid_standup_arc_2026_02_12_225150 |
| CrossQ | ✅ | 154162.28 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_humanoid_standup_v2 | crossq_humanoid_standup_v2_2026_02_22_155517 |
3.6 InvertedDoublePendulum-v5
Docs: InvertedDoublePendulum | State: Box(9) | Action: Box(1) | Target reward MA ~8000
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | 8383.76 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_inverted_double_pendulum_arc | ppo_inverted_double_pendulum_arc_inverteddoublependulum_2026_02_12_225231 |
| SAC | ✅ | 9032.67 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_inverted_double_pendulum_arc | sac_inverted_double_pendulum_arc_2026_02_12_025206 |
| CrossQ | ⚠️ | 8255.82 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_inverted_double_pendulum_v2 | crossq_inverted_double_pendulum_v2_2026_02_22_155616 |
3.7 InvertedPendulum-v5
Docs: InvertedPendulum | State: Box(4) | Action: Box(1) | Target reward MA ~1000
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | 949.94 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_inverted_pendulum_arc | ppo_inverted_pendulum_arc_invertedpendulum_2026_02_12_062037 |
| SAC | ✅ | 928.43 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_inverted_pendulum_arc | sac_inverted_pendulum_arc_2026_02_12_225503 |
| CrossQ | ⚠️ | 841.87 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_inverted_pendulum | crossq_inverted_pendulum_2026_02_21_134607 |
3.8 Pusher-v5
Docs: Pusher | State: Box(23) | Action: Box(7) | Target reward MA > -50
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | -49.59 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_mujoco_longhorizon_arc | ppo_mujoco_longhorizon_arc_pusher_2026_02_12_222228 |
| SAC | ✅ | -43.00 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_pusher_arc | sac_pusher_arc_2026_02_12_053603 |
| CrossQ | ✅ | -37.08 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_pusher | crossq_pusher_2026_02_21_134637 |
3.9 Reacher-v5
Docs: Reacher | State: Box(10) | Action: Box(2) | Target reward MA > -10
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | -5.03 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_mujoco_longhorizon_arc | ppo_mujoco_longhorizon_arc_reacher_2026_02_12_115033 |
| SAC | ✅ | -6.31 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_reacher_arc | sac_reacher_arc_2026_02_12_055200 |
| CrossQ | ✅ | -5.66 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_reacher | crossq_reacher_2026_02_21_134606 |
3.10 Swimmer-v5
Docs: Swimmer | State: Box(8) | Action: Box(2) | Target reward MA > 200
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | 282.44 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_swimmer_arc | ppo_swimmer_arc_swimmer_2026_02_12_100445 |
| SAC | ✅ | 301.34 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_swimmer_arc | sac_swimmer_arc_2026_02_12_054349 |
| CrossQ | ✅ | 221.12 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_swimmer | crossq_swimmer_2026_02_21_134711 |
3.11 Walker2d-v5
Docs: Walker2d | State: Box(17) | Action: Box(6) | Target reward MA > 3500
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
|---|---|---|---|---|---|
| PPO | ✅ | 4378.62 | slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml | ppo_mujoco_arc | ppo_mujoco_arc_walker2d_2026_02_12_190312 |
| SAC | ⚠️ | 3123.66 | slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml | sac_walker2d_arc | sac_walker2d_3m_i4_arc_2026_02_14_185550 |
| CrossQ | ✅ | 4277.15 | slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml | crossq_walker2d_ln_7m | crossq_walker2d_ln_7m_2026_02_22_014846 |
Phase 4: Atari
Docs: Atari environments | State: Box(84,84,4 after preprocessing) | Action: Discrete(4-18, game-dependent)
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 10000
Environment:
- Gymnasium ALE v5 with `life_loss_info=true`
- v5 uses sticky actions (`repeat_action_probability=0.25`) per Machado et al. (2018) best practices
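Sticky actions can be sketched as follows; the helper name is ours, not an ALE API, but the mechanism matches the v5 default: with probability 0.25 the emulator repeats the previous action instead of the agent's chosen one, injecting stochasticity.

```python
import random

def sticky(prev_action, chosen_action, rng, p=0.25):
    """Return the action the emulator actually executes under sticky actions."""
    return prev_action if rng.random() < p else chosen_action

rng = random.Random(0)
repeats = sum(sticky(0, 1, rng) == 0 for _ in range(10_000))
print(repeats / 10_000)  # close to 0.25
```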
Algorithm Specs (all use Nature CNN [32,64,64] + 512fc):
- DDQN+PER: Skipped; off-policy variants are ~6x slower (230 fps vs ~1500 fps), not cost-effective at 10M frames
- A2C: a2c_atari_arc.yaml - RMSprop (lr=7e-4), training_frequency=32
- PPO: ppo_atari_arc.yaml - AdamW (lr=2.5e-4), minibatch=256, horizon=128, epochs=4, max_frame=10e6
- SAC: sac_atari_arc.yaml - Categorical SAC, AdamW (lr=3e-4), training_iter=3, training_frequency=4, max_frame=2e6
PPO Lambda Variants (table shows best result per game):
| SPEC_NAME | Lambda | Best for |
|---|---|---|
| ppo_atari_arc | 0.95 | Strategic games (default) |
| ppo_atari_lam85_arc | 0.85 | Mixed games |
| ppo_atari_lam70_arc | 0.70 | Action games |
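The intuition behind the variants: the GAE lambda-return geometrically down-weights long n-step returns, giving an effective credit-assignment horizon of roughly 1 / (1 - lambda) steps, so reactive "action" games favor smaller lambda. A quick check:

```python
def effective_horizon(lam):
    """Approximate credit-assignment horizon of a GAE lambda-return."""
    return 1.0 / (1.0 - lam)

for lam in (0.95, 0.85, 0.70):
    print(f"lambda={lam}: ~{effective_horizon(lam):.0f} steps")
# lambda=0.95 -> ~20 steps; 0.85 -> ~7; 0.70 -> ~3
```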
Reproduce:
# A2C (10M frames)
source .env && slm-lab run-remote --gpu -s env=ENV -s max_frame=1e7 \
slm_lab/spec/benchmark_arc/a2c/a2c_atari_arc.yaml a2c_gae_atari_arc train -n NAME
# PPO (10M frames)
source .env && slm-lab run-remote --gpu -s env=ENV -s max_frame=1e7 \
slm_lab/spec/benchmark_arc/ppo/ppo_atari_arc.yaml SPEC_NAME train -n NAME
# SAC (2M frames - off-policy, more sample-efficient but slower per frame)
source .env && slm-lab run-remote --gpu -s env=ENV \
slm_lab/spec/benchmark_arc/sac/sac_atari_arc.yaml sac_atari_arc train -n NAME
Note: HF Data links marked "-" indicate runs completed but not yet uploaded to HuggingFace. Scores are extracted from local trial_metrics.
Training Curves (A2C vs PPO vs SAC):
Skipped: Adventure, MontezumaRevenge, Pitfall, PrivateEye, Venture (hard exploration), ElevatorAction (deprecated env)
PPO Lambda Comparison
| ENV | ppo_atari_arc | ppo_atari_lam85_arc | ppo_atari_lam70_arc |
|---|---|---|---|
| ALE/AirRaid-v5 | 7042.84 | - | - |
| ALE/Alien-v5 | 1789.26 | - | - |
| ALE/Amidar-v5 | - | 584.28 | - |
| ALE/Assault-v5 | - | 4448.16 | - |
| ALE/Asterix-v5 | - | 3235.46 | - |
| ALE/Asteroids-v5 | - | 1577.92 | - |
| ALE/Atlantis-v5 | 848087.19 | - | - |
| ALE/BankHeist-v5 | 1058.25 | - | - |
| ALE/BattleZone-v5 | - | 27176.78 | - |
| ALE/BeamRider-v5 | 2761.75 | - | - |
| ALE/Berzerk-v5 | 835.46 | - | - |
| ALE/Bowling-v5 | 45.02 | - | - |
| ALE/Boxing-v5 | 92.18 | - | - |
| ALE/Breakout-v5 | - | - | 326.47 |
| ALE/Carnival-v5 | - | - | 3912.59 |
| ALE/Centipede-v5 | - | - | 4780.75 |
| ALE/ChopperCommand-v5 | 5391.30 | - | - |
| ALE/CrazyClimber-v5 | - | 112094.03 | - |
| ALE/Defender-v5 | - | - | 47894.69 |
| ALE/DemonAttack-v5 | - | - | 19370.38 |
| ALE/DoubleDunk-v5 | -3.03 | - | - |
| ALE/Enduro-v5 | - | 986.46 | - |
| ALE/FishingDerby-v5 | - | 25.71 | - |
| ALE/Freeway-v5 | 32.42 | - | - |
| ALE/Frostbite-v5 | 284.07 | - | - |
| ALE/Gopher-v5 | - | - | 6500.38 |
| ALE/Gravitar-v5 | 602.58 | - | - |
| ALE/Hero-v5 | - | 22477.89 | - |
| ALE/IceHockey-v5 | -4.05 | - | - |
| ALE/Jamesbond-v5 | 710.98 | - | - |
| ALE/JourneyEscape-v5 | - | -1248.98 | - |
| ALE/Kangaroo-v5 | - | - | 10660.35 |
| ALE/Krull-v5 | 7874.33 | - | - |
| ALE/KungFuMaster-v5 | - | - | 28128.04 |
| ALE/MsPacman-v5 | - | 2330.74 | - |
| ALE/NameThisGame-v5 | 6879.23 | - | - |
| ALE/Phoenix-v5 | - | - | 13923.26 |
| ALE/Pong-v5 | - | 16.69 | - |
| ALE/Pooyan-v5 | - | - | 5308.66 |
| ALE/Qbert-v5 | 15460.48 | - | - |
| ALE/Riverraid-v5 | - | 9599.75 | - |
| ALE/RoadRunner-v5 | - | 37980.95 | - |
| ALE/Robotank-v5 | 21.04 | - | - |
| ALE/Seaquest-v5 | 1775.14 | - | - |
| ALE/Skiing-v5 | -28217.28 | - | - |
| ALE/Solaris-v5 | 2212.78 | - | - |
| ALE/SpaceInvaders-v5 | 892.49 | - | - |
| ALE/StarGunner-v5 | - | - | 49328.73 |
| ALE/Surround-v5 | -4.47 | - | - |
| ALE/Tennis-v5 | - | -12.27 | - |
| ALE/TimePilot-v5 | 4432.73 | - | - |
| ALE/Tutankham-v5 | - | 210.87 | - |
| ALE/UpNDown-v5 | - | 147168.80 | - |
| ALE/VideoPinball-v5 | - | - | 38370.30 |
| ALE/WizardOfWor-v5 | 6100.42 | - | - |
| ALE/YarsRevenge-v5 | 12873.91 | - | - |
| ALE/Zaxxon-v5 | 9523.49 | - | - |
Legend: Bold = Best score | - = Not tested