# SLM-Lab Benchmarks
Reproducible deep RL algorithm validation across Gymnasium environments (Classic Control, Box2D, MuJoCo, Atari).
## Usage
After installation, copy SPEC_FILE and SPEC_NAME from the result tables below (Atari uses one shared spec file; see Phase 4).
### Running Benchmarks
Local - runs on your machine (Classic Control finishes in minutes):
```bash
slm-lab run SPEC_FILE SPEC_NAME train
```
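For example, a concrete local run using the CartPole PPO spec listed in Phase 1 below:
```bash
slm-lab run slm_lab/spec/benchmark/ppo/ppo_cartpole.json ppo_cartpole train
```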
Remote - cloud GPU via dstack, auto-syncs results to HuggingFace:
```bash
source .env && slm-lab run-remote --gpu SPEC_FILE SPEC_NAME train -n NAME
```
Remote setup: `cp .env.example .env`, then set `HF_TOKEN`. See the README for dstack config.
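A minimal `.env` sketch (the token value is a placeholder; create your own at huggingface.co/settings/tokens):
```bash
# .env - loaded via `source .env` before remote commands
HF_TOKEN=hf_your_token_here  # placeholder; required for HuggingFace sync
```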
### Atari
All games share one spec file (54 tested, 5 hard-exploration games skipped). Use `-s env=ENV` to substitute the game. Runs take ~2-3 hours on GPU.
```bash
source .env && slm-lab run-remote --gpu -s env=ALE/Pong-v5 slm_lab/spec/benchmark/ppo/ppo_atari.json ppo_atari train -n pong
```
### Download Results
Trained models and metrics sync to HuggingFace. Pull locally:
```bash
source .env && slm-lab pull SPEC_NAME
slm-lab list  # see available experiments
```
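For instance, to pull the PPO CartPole experiment from Phase 1 (SPEC_NAME taken from the results table):
```bash
source .env && slm-lab pull ppo_cartpole
```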
## Benchmark Contribution
To ensure benchmark integrity, follow these steps when adding or updating results:
### 1. Audit Spec Settings
- Before running: ensure `spec.json` matches the Settings line defined in each benchmark table. Example: `max_frame 3e5 | num_envs 4 | max_session 4 | log_frequency 500`
- After pulling: verify the downloaded `spec.json` matches these rules before using the data (see the check sketch after this list).
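One lightweight way to eyeball these fields for manual comparison (a grep sketch; adjust the path to the spec you are auditing):
```bash
# Print the settings-relevant keys from a spec file, with line numbers
grep -nE '"(max_frame|num_envs|max_session|log_frequency)"' \
  slm_lab/spec/benchmark/ppo/ppo_cartpole.json
```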
### 2. Run Benchmark & Commit Specs
- Run: execute the benchmark locally or remotely using the commands in Usage.
- Commit specs: always commit the `spec.json` file used for the run to the repo.
- Table entry: ensure BENCHMARKS.md has an entry with the correct `SPEC_FILE` and `SPEC_NAME`.
### 3. Record Scores & Plots
- Score: at run completion, extract `total_reward_ma` from the logs (trial_metrics).
- Link: add the HuggingFace folder link: `[FOLDER](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/FOLDER)`
- Pull data: `source .env && uv run hf download SLM-Lab/benchmark --include "data/FOLDER/*" --local-dir hf_data --repo-type dataset`
- Plot: generate with folders from the table: `slm-lab plot -t "CartPole-v1" -f ppo_cartpole_2026...,dqn_cartpole_2026...`
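A worked example using two CartPole folders copied from the HF Repo column of the Phase 1 table below:
```bash
# Download each experiment folder, then plot them together
source .env && uv run hf download SLM-Lab/benchmark \
  --include "data/ppo_cartpole_2026_01_30_221924/*" --local-dir hf_data --repo-type dataset
uv run hf download SLM-Lab/benchmark \
  --include "data/dqn_boltzmann_cartpole_2026_01_30_215213/*" --local-dir hf_data --repo-type dataset
slm-lab plot -t "CartPole-v1" -f ppo_cartpole_2026_01_30_221924,dqn_boltzmann_cartpole_2026_01_30_215213
```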
## Environment Settings
Standardized settings for fair comparison. The Settings line in each result table shows these values.
| Env Category | num_envs | max_frame | log_frequency | grace_period |
|---|---|---|---|---|
| Classic Control | 4 | 2e5-3e5 | 500 | 1e4 |
| Box2D | 8 | 3e5 | 1000 | 5e4 |
| MuJoCo | 16 | 4e6-10e6 | 1e4 | 1e5-1e6 |
| Atari | 16 | 10e6 | 10000 | 5e5 |
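As a sketch, `max_frame` can be pinned to the standardized value at launch with the `-s` substitution shown in Usage (spec file and name from the Phase 2 table; the other columns live inside the spec file itself):
```bash
# Box2D run pinned to the standardized 3e5 frame budget
slm-lab run -s max_frame=3e5 slm_lab/spec/benchmark/ppo/ppo_lunar.json ppo_lunar train
```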
## Hyperparameter Search
When an algorithm fails to reach its target, run search instead of train:
```bash
slm-lab run SPEC_FILE SPEC_NAME search  # local
source .env && slm-lab run-remote --gpu SPEC_FILE SPEC_NAME search -n NAME  # remote
```
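For example, DQN on CartPole lands below target in Phase 1 (⚠️ in the results table), so a local search run would be:
```bash
slm-lab run slm_lab/spec/benchmark/dqn/dqn_cartpole.json dqn_boltzmann_cartpole search
```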
| Stage | Mode | Config | Purpose |
|---|---|---|---|
| ASHA | search | `max_session=1`, search_scheduler enabled | Wide exploration with early stopping |
| Multi | search | `max_session=4`, no search_scheduler | Robust validation with averaging |
| Validate | train | Final spec | Confirmation run |
Do not use search results as benchmark results; use the final validation (train) run with the committed spec.
Search budget: ~3-4 trials per search dimension (8 trials for 2-3 dims, 16 for 3-4 dims, 20+ for 5+ dims). Example search spec fragment (16 trials over 3 dimensions):
```json
{
  "meta": {
    "max_session": 1, "max_trial": 16,
    "search_resources": {"cpu": 1, "gpu": 0.125},
    "search_scheduler": {"grace_period": 1e5, "reduction_factor": 3}
  },
  "search": {
    "agent.algorithm.gamma__uniform": [0.98, 0.999],
    "agent.algorithm.lam__uniform": [0.9, 0.98],
    "agent.net.optim_spec.lr__loguniform": [1e-4, 1e-3]
  }
}
```
## Progress
| Phase | Category | Envs | REINFORCE | SARSA | DQN | DDQN+PER | A2C | PPO | SAC | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Classic Control | 3 | 🔄 | 🔄 | 🔄 | 🔄 | 🔄 | 🔄 | 🔄 | Rerun pending |
| 2 | Box2D | 2 | N/A | N/A | 🔄 | 🔄 | 🔄 | 🔄 | 🔄 | Rerun pending |
| 3 | MuJoCo | 11 | N/A | N/A | N/A | N/A | 🔄 | 🔄 | 🔄 | Rerun pending |
| 4 | Atari | 59 | N/A | N/A | N/A | Skip | 🔄 | ✅ | N/A | 54 games |
Legend: ✅ Solved | ⚠️ Close (>80%) | 📊 Acceptable | ❌ Failed | 🔄 In progress/Pending | Skip Not started | N/A Not applicable
## Results
### Phase 1: Classic Control
#### 1.1 CartPole-v1
Docs: CartPole | State: Box(4) | Action: Discrete(2) | Target reward MA > 400
Settings: max_frame 2e5 | num_envs 4 | max_session 4 | log_frequency 500
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| REINFORCE | ✅ | 469.68 | slm_lab/spec/benchmark/reinforce/reinforce_cartpole.json | reinforce_cartpole | reinforce_cartpole_2026_01_30_215510 |
| SARSA | ✅ | 421.58 | slm_lab/spec/benchmark/sarsa/sarsa_cartpole.json | sarsa_boltzmann_cartpole | sarsa_boltzmann_cartpole_2026_01_30_215508 |
| DQN | ⚠️ | 188.07 | slm_lab/spec/benchmark/dqn/dqn_cartpole.json | dqn_boltzmann_cartpole | dqn_boltzmann_cartpole_2026_01_30_215213 |
| DDQN+PER | ✅ | 432.88 | slm_lab/spec/benchmark/dqn/dqn_cartpole.json | ddqn_per_boltzmann_cartpole | ddqn_per_boltzmann_cartpole_2026_01_30_215454 |
| A2C | ✅ | 499.73 | slm_lab/spec/benchmark/a2c/a2c_gae_cartpole.json | a2c_gae_cartpole | a2c_gae_cartpole_2026_01_30_215337 |
| PPO | ✅ | 499.54 | slm_lab/spec/benchmark/ppo/ppo_cartpole.json | ppo_cartpole | ppo_cartpole_2026_01_30_221924 |
| SAC | ⚠️ | 359.69 | slm_lab/spec/benchmark/sac/sac_cartpole.json | sac_cartpole | sac_cartpole_2026_01_30_221934 |
#### 1.2 Acrobot-v1
Docs: Acrobot | State: Box(6) | Action: Discrete(3) | Target reward MA > -100
Settings: max_frame 3e5 | num_envs 4 | max_session 4 | log_frequency 500
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| DQN | ✅ | -94.81 | slm_lab/spec/benchmark/dqn/dqn_acrobot.json | dqn_boltzmann_acrobot | dqn_boltzmann_acrobot_2026_01_30_215429 |
| DDQN+PER | ✅ | -85.17 | slm_lab/spec/benchmark/dqn/ddqn_per_acrobot.json | ddqn_per_acrobot | ddqn_per_acrobot_2026_01_30_215436 |
| A2C | ✅ | -83.75 | slm_lab/spec/benchmark/a2c/a2c_gae_acrobot.json | a2c_gae_acrobot | a2c_gae_acrobot_2026_01_30_215413 |
| PPO | ✅ | -81.43 | slm_lab/spec/benchmark/ppo/ppo_acrobot.json | ppo_acrobot | ppo_acrobot_2026_01_30_215352 |
| SAC | ✅ | -97.08 | slm_lab/spec/benchmark/sac/sac_acrobot.json | sac_acrobot | sac_acrobot_2026_01_30_215401 |
#### 1.3 Pendulum-v1
Docs: Pendulum | State: Box(3) | Action: Box(1) | Target reward MA > -200
Settings: max_frame 3e5 | num_envs 4 | max_session 4 | log_frequency 500
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| A2C | ❌ | -553.00 | slm_lab/spec/benchmark/a2c/a2c_gae_pendulum.json | a2c_gae_pendulum | a2c_gae_pendulum_2026_01_30_215421 |
| PPO | ✅ | -168.26 | slm_lab/spec/benchmark/ppo/ppo_pendulum.json | ppo_pendulum | ppo_pendulum_2026_01_30_215944 |
| SAC | ✅ | -152.33 | slm_lab/spec/benchmark/sac/sac_pendulum.json | sac_pendulum | sac_pendulum_2026_01_30_215454 |
### Phase 2: Box2D
#### 2.1 LunarLander-v3 (Discrete)
Docs: LunarLander | State: Box(8) | Action: Discrete(4) | Target reward MA > 200
Settings: max_frame 3e5 | num_envs 8 | max_session 4 | log_frequency 1000
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| DQN | ⚠️ | 183.64 | slm_lab/spec/benchmark/dqn/dqn_lunar.json | dqn_concat_lunar | dqn_concat_lunar_2026_01_30_215529 |
| DDQN+PER | ✅ | 261.49 | slm_lab/spec/benchmark/dqn/ddqn_per_lunar.json | ddqn_per_concat_lunar | ddqn_per_concat_lunar_2026_01_30_215532 |
| A2C | ❌ | 9.53 | slm_lab/spec/benchmark/a2c/a2c_gae_lunar.json | a2c_gae_lunar | a2c_gae_lunar_2026_01_30_215529 |
| PPO | ⚠️ | 159.02 | slm_lab/spec/benchmark/ppo/ppo_lunar.json | ppo_lunar | ppo_lunar_2026_01_30_215550 |
| SAC | ❌ | -75.43 | slm_lab/spec/benchmark/sac/sac_lunar.json | sac_lunar | sac_lunar_2026_01_30_215552 |
#### 2.2 LunarLander-v3 (Continuous)
Docs: LunarLander | State: Box(8) | Action: Box(2) | Target reward MA > 200
Settings: max_frame 3e5 | num_envs 8 | max_session 4 | log_frequency 1000
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| A2C | ❌ | -38.18 | slm_lab/spec/benchmark/a2c/a2c_gae_lunar.json | a2c_gae_lunar_continuous | a2c_gae_lunar_continuous_2026_01_30_215630 |
| PPO | ⚠️ | 165.48 | slm_lab/spec/benchmark/ppo/ppo_lunar.json | ppo_lunar_continuous | ppo_lunar_continuous_2026_01_31_104549 |
| SAC | ✅ | 208.60 | slm_lab/spec/benchmark/sac/sac_lunar.json | sac_lunar_continuous | sac_lunar_continuous_2026_01_31_104537 |
### Phase 3: MuJoCo
Docs: MuJoCo environments | State/Action: Continuous | Target: Practical baselines (no official "solved" threshold)
Settings: max_frame 4e6-10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
Algorithm: PPO only; SAC omitted (off-policy training is too compute-heavy for systematic benchmarking). Network: MLP [256,256] with tanh activations, orthogonal init.
Spec Variants: Two unified specs in ppo_mujoco.json, plus individual specs for edge cases.
| SPEC_NAME | Envs | Key Config |
|---|---|---|
| ppo_mujoco | HalfCheetah, Walker, Humanoid, HumanoidStandup | gamma=0.99, lam=0.95 |
| ppo_mujoco_longhorizon | Reacher, Pusher | gamma=0.997, lam=0.97 |
| Individual specs | Hopper, Swimmer, Ant, InvertedPendulum, InvertedDoublePendulum | See spec files for tuned hyperparams |
Reproduce: copy ENV, SPEC_FILE, and SPEC_NAME from the table. Use `-s max_frame=` for all specs; add `-s env=` for unified specs:
```bash
# Unified specs (ppo_mujoco.json)
source .env && slm-lab run-remote --gpu -s env=ENV -s max_frame=MAX_FRAME \
  slm_lab/spec/benchmark/ppo/ppo_mujoco.json SPEC_NAME train -n NAME

# Individual specs (env hardcoded)
source .env && slm-lab run-remote --gpu -s max_frame=MAX_FRAME \
  slm_lab/spec/benchmark/ppo/SPEC_FILE SPEC_NAME train -n NAME
```
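For example, Walker2d-v5 via the unified spec (values from the table below):
```bash
source .env && slm-lab run-remote --gpu -s env=Walker2d-v5 -s max_frame=10e6 \
  slm_lab/spec/benchmark/ppo/ppo_mujoco.json ppo_mujoco train -n walker2d
```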
| ENV | MAX_FRAME | SPEC_FILE | SPEC_NAME |
|---|---|---|---|
| HalfCheetah-v5 | 10e6 | ppo_mujoco.json | ppo_mujoco |
| Walker2d-v5 | 10e6 | ppo_mujoco.json | ppo_mujoco |
| Humanoid-v5 | 10e6 | ppo_mujoco.json | ppo_mujoco |
| HumanoidStandup-v5 | 4e6 | ppo_mujoco.json | ppo_mujoco |
| Hopper-v5 | 4e6 | ppo_hopper.json | ppo_hopper |
| Swimmer-v5 | 4e6 | ppo_swimmer.json | ppo_swimmer |
| Ant-v5 | 10e6 | ppo_ant.json | ppo_ant |
| Reacher-v5 | 4e6 | ppo_mujoco.json | ppo_mujoco_longhorizon |
| Pusher-v5 | 4e6 | ppo_mujoco.json | ppo_mujoco_longhorizon |
| InvertedPendulum-v5 | 4e6 | ppo_inverted_pendulum.json | ppo_inverted_pendulum |
| InvertedDoublePendulum-v5 | 10e6 | ppo_inverted_double_pendulum.json | ppo_inverted_double_pendulum |
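And an individual-spec example, Hopper-v5 (env is hardcoded in the spec, so no `-s env=`):
```bash
source .env && slm-lab run-remote --gpu -s max_frame=4e6 \
  slm_lab/spec/benchmark/ppo/ppo_hopper.json ppo_hopper train -n hopper
```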
#### 3.1 Hopper-v5
Docs: Hopper | State: Box(11) | Action: Box(3) | Target reward MA ~ 2000
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | 1972.38 | slm_lab/spec/benchmark/ppo/ppo_hopper.json | ppo_hopper | ppo_hopper_2026_01_31_105438 |
#### 3.2 HalfCheetah-v5
Docs: HalfCheetah | State: Box(17) | Action: Box(6) | Target reward MA > 5000
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | 5851.70 | slm_lab/spec/benchmark/ppo/ppo_mujoco.json | ppo_mujoco | ppo_mujoco_halfcheetah_2026_01_30_230302 |
#### 3.3 Walker2d-v5
Docs: Walker2d | State: Box(17) | Action: Box(6) | Target reward MA > 3500
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | 4042.07 | slm_lab/spec/benchmark/ppo/ppo_mujoco.json | ppo_mujoco | ppo_mujoco_walker2d_2026_01_30_222124 |
#### 3.4 Ant-v5
Docs: Ant | State: Box(105) | Action: Box(8) | Target reward MA > 2000
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | 2514.64 | slm_lab/spec/benchmark/ppo/ppo_ant.json | ppo_ant | ppo_ant_2026_01_31_042006 |
#### 3.5 Swimmer-v5
Docs: Swimmer | State: Box(8) | Action: Box(2) | Target reward MA > 200
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | 229.31 | slm_lab/spec/benchmark/ppo/ppo_swimmer.json | ppo_swimmer | ppo_swimmer_2026_01_30_215922 |
#### 3.6 Reacher-v5
Docs: Reacher | State: Box(10) | Action: Box(2) | Target reward MA > -10
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | -5.08 | slm_lab/spec/benchmark/ppo/ppo_mujoco.json | ppo_mujoco_longhorizon | ppo_mujoco_longhorizon_reacher_2026_01_30_215805 |
#### 3.7 Pusher-v5
Docs: Pusher | State: Box(23) | Action: Box(7) | Target reward MA > -50
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | -49.09 | slm_lab/spec/benchmark/ppo/ppo_mujoco.json | ppo_mujoco_longhorizon | ppo_mujoco_longhorizon_pusher_2026_01_30_215824 |
#### 3.8 InvertedPendulum-v5
Docs: InvertedPendulum | State: Box(4) | Action: Box(1) | Target reward MA ~1000
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | 944.87 | slm_lab/spec/benchmark/ppo/ppo_inverted_pendulum.json | ppo_inverted_pendulum | ppo_inverted_pendulum_2026_01_30_230211 |
#### 3.9 InvertedDoublePendulum-v5
Docs: InvertedDoublePendulum | State: Box(9) | Action: Box(1) | Target reward MA ~8000
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | 7622.00 | slm_lab/spec/benchmark/ppo/ppo_inverted_double_pendulum.json | ppo_inverted_double_pendulum | ppo_inverted_double_pendulum_2026_01_30_220651 |
#### 3.10 Humanoid-v5
Docs: Humanoid | State: Box(348) | Action: Box(17) | Target reward MA > 1000
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | 3774.08 | slm_lab/spec/benchmark/ppo/ppo_mujoco.json | ppo_mujoco | ppo_mujoco_humanoid_2026_01_30_222339 |
#### 3.11 HumanoidStandup-v5
Docs: HumanoidStandup | State: Box(348) | Action: Box(17) | Target reward MA > 100000
Settings: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4
| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|---|---|---|---|---|---|
| PPO | ✅ | 165841.17 | slm_lab/spec/benchmark/ppo/ppo_mujoco.json | ppo_mujoco | ppo_mujoco_humanoidstandup_2026_01_30_215802 |
### Phase 4: Atari
Docs: Atari environments | State: Box(84,84,4 after preprocessing) | Action: Discrete(4-18, game-dependent)
Settings: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 10000
Environment:
- Gymnasium ALE v5 with `life_loss_info=true`
- v5 uses sticky actions (`repeat_action_probability=0.25`) per Machado et al. (2018) best practices
Algorithm Specs (all use the Nature CNN [32,64,64] + 512 FC):
- DDQN+PER: skipped - off-policy variants are ~6x slower (230 fps vs ~1500 fps), not cost-effective at 10M frames
- A2C: a2c_gae_atari.json - RMSprop (lr=7e-4), training_frequency=32
- PPO: ppo_atari.json - AdamW (lr=2.5e-4), minibatch=256, horizon=128, epochs=4
PPO Lambda Variants (table shows best result per game):
| SPEC_NAME | Lambda | Best for |
|---|---|---|
| ppo_atari | 0.95 | Strategic games (default) |
| ppo_atari_lam85 | 0.85 | Mixed games |
| ppo_atari_lam70 | 0.70 | Action games |
Reproduce:
```bash
# A2C
source .env && slm-lab run-remote --gpu -s env=ENV -s max_frame=1e7 \
  slm_lab/spec/benchmark/a2c/a2c_gae_atari.json a2c_gae_atari train -n NAME

# PPO
source .env && slm-lab run-remote --gpu -s env=ENV -s max_frame=1e7 \
  slm_lab/spec/benchmark/ppo/ppo_atari.json SPEC_NAME train -n NAME
```
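For example, Breakout scored best with the lam70 variant (see the lambda comparison table below), so its run would be:
```bash
source .env && slm-lab run-remote --gpu -s env=ALE/Breakout-v5 -s max_frame=1e7 \
  slm_lab/spec/benchmark/ppo/ppo_atari.json ppo_atari_lam70 train -n breakout-lam70
```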
Training Curves (A2C vs PPO): (plots omitted)
Skipped (hard exploration): Adventure, MontezumaRevenge, Pitfall, PrivateEye, Venture
#### PPO Lambda Comparison
| ENV | ppo_atari | ppo_atari_lam85 | ppo_atari_lam70 |
|---|---|---|---|
| ALE/AirRaid-v5 | **8245** | - | - |
| ALE/Alien-v5 | **1453** | 1353 | 1274 |
| ALE/Amidar-v5 | 574 | **580** | - |
| ALE/Assault-v5 | 4059 | **4293** | 3314 |
| ALE/Asterix-v5 | 2967 | **3482** | - |
| ALE/Asteroids-v5 | 1497 | **1554** | - |
| ALE/Atlantis-v5 | **792886** | 754k | 710k |
| ALE/BankHeist-v5 | **1045** | **1045** | - |
| ALE/BattleZone-v5 | 21270 | **26383** | 13857 |
| ALE/BeamRider-v5 | **2765** | - | - |
| ALE/Berzerk-v5 | **1072** | - | - |
| ALE/Bowling-v5 | **46.45** | - | - |
| ALE/Boxing-v5 | **91.17** | - | - |
| ALE/Breakout-v5 | 191 | 292 | **327** |
| ALE/Carnival-v5 | 3071 | 3013 | **3967** |
| ALE/Centipede-v5 | 3917 | - | **4915** |
| ALE/ChopperCommand-v5 | **5355** | - | - |
| ALE/CrazyClimber-v5 | 107183 | **107370** | - |
| ALE/Defender-v5 | 37162 | - | **51439** |
| ALE/DemonAttack-v5 | 7755 | - | **16558** |
| ALE/DoubleDunk-v5 | **-2.38** | - | - |
| ALE/ElevatorAction-v5 | **5446** | 363 | 3933 |
| ALE/Enduro-v5 | 414 | **898** | 872 |
| ALE/FishingDerby-v5 | 22.80 | **27.10** | - |
| ALE/Freeway-v5 | **31.30** | - | - |
| ALE/Frostbite-v5 | **301** | 275 | 267 |
| ALE/Gopher-v5 | 4172 | - | **6508** |
| ALE/Gravitar-v5 | **599** | 253 | 145 |
| ALE/Hero-v5 | 21052 | **28238** | - |
| ALE/IceHockey-v5 | **-3.93** | -5.58 | -7.36 |
| ALE/Jamesbond-v5 | **662** | - | - |
| ALE/JourneyEscape-v5 | -1582 | **-1252** | -1547 |
| ALE/Kangaroo-v5 | 2623 | **9912** | - |
| ALE/Krull-v5 | **7841** | - | - |
| ALE/KungFuMaster-v5 | 18973 | 28334 | **29068** |
| ALE/MsPacman-v5 | 2308 | **2372** | 2297 |
| ALE/NameThisGame-v5 | **5993** | - | - |
| ALE/Phoenix-v5 | 7940 | - | **15659** |
| ALE/Pong-v5 | 15.01 | **16.91** | 12.85 |
| ALE/Pooyan-v5 | 4704 | - | **5716** |
| ALE/Qbert-v5 | **15094** | - | - |
| ALE/Riverraid-v5 | 7319 | **9428** | - |
| ALE/RoadRunner-v5 | 24204 | **37015** | - |
| ALE/Robotank-v5 | **20.07** | 8.24 | 2.59 |
| ALE/Seaquest-v5 | **1796** | - | - |
| ALE/Skiing-v5 | **-19340** | -22980 | -29975 |
| ALE/Solaris-v5 | **2094** | - | - |
| ALE/SpaceInvaders-v5 | **726** | - | - |
| ALE/StarGunner-v5 | 31862 | - | **47495** |
| ALE/Surround-v5 | **-2.52** | - | -6.79 |
| ALE/Tennis-v5 | -7.66 | **-4.41** | - |
| ALE/TimePilot-v5 | **4668** | - | - |
| ALE/Tutankham-v5 | 203 | **217** | - |
| ALE/UpNDown-v5 | **182472** | - | - |
| ALE/VideoPinball-v5 | 31385 | - | **56746** |
| ALE/WizardOfWor-v5 | **5814** | 5466 | 4740 |
| ALE/YarsRevenge-v5 | **17120** | - | - |
| ALE/Zaxxon-v5 | **10756** | - | - |
Legend: Bold = Best score | - = Not tested