Upload folder using huggingface_hub

- docs/BENCHMARKS.md +504 -0
- docs/plots/Acrobot-v1_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Ant-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/CartPole-v1_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/HalfCheetah-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Hopper-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Humanoid-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/HumanoidStandup-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/InvertedDoublePendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/InvertedPendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/LunarLander-v3_Continuous_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/LunarLander-v3_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Pendulum-v1_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Pusher-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Reacher-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Swimmer-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Walker2d-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
docs/BENCHMARKS.md
ADDED
@@ -0,0 +1,504 @@
# SLM-Lab Benchmarks

Reproducible deep RL algorithm validation across Gymnasium environments (Classic Control, Box2D, MuJoCo, Atari).

---

## Usage

After [installation](../README.md#quick-start), copy `SPEC_FILE` and `SPEC_NAME` from the result tables below (Atari uses one shared spec file - see [Phase 4](#phase-4-atari)).

### Running Benchmarks

**Local** - runs on your machine (Classic Control: minutes):
```bash
slm-lab run SPEC_FILE SPEC_NAME train
```
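
For example, to reproduce the CartPole PPO result locally, with the spec file and name taken from the Phase 1 table below:
```bash
# PPO on CartPole-v1, using the committed benchmark spec
slm-lab run slm_lab/spec/benchmark/ppo/ppo_cartpole.json ppo_cartpole train
```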

**Remote** - cloud GPU via [dstack](https://dstack.ai), auto-syncs to HuggingFace:
```bash
source .env && slm-lab run-remote --gpu SPEC_FILE SPEC_NAME train -n NAME
```

Remote setup: `cp .env.example .env`, then set `HF_TOKEN`. See [README](../README.md#cloud-training-dstack) for dstack config.

### Atari

All games share one spec file (54 tested, 5 hard-exploration games skipped). Use `-s env=ENV` to substitute the game. Runs take ~2-3 hours on a GPU.

```bash
source .env && slm-lab run-remote --gpu -s env=ALE/Pong-v5 slm_lab/spec/benchmark/ppo/ppo_atari.json ppo_atari train -n pong
```

### Download Results

Trained models and metrics sync to [HuggingFace](https://huggingface.co/datasets/SLM-Lab/benchmark). Pull locally:
```bash
source .env && slm-lab pull SPEC_NAME
slm-lab list  # see available experiments
```
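
As a concrete example, assuming `pull` accepts the same `SPEC_NAME` values shown in the result tables (e.g. `ppo_cartpole` from Phase 1):
```bash
# Fetch the trained models and metrics for the CartPole PPO benchmark run
source .env && slm-lab pull ppo_cartpole
```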

### Benchmark Contribution

To ensure benchmark integrity, follow these steps when adding or updating results:

#### 1. Audit Spec Settings
* **Before Running**: Ensure `spec.json` matches the **Settings** line defined in each benchmark table (a quick check is sketched below).
  * **Example**: `max_frame 3e5 | num_envs 4 | max_session 4 | log_frequency 500`
* **After Pulling**: Verify the downloaded `spec.json` matches these settings before using the data.
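
A minimal way to eyeball these fields before a run (an illustrative check with standard tools; the key names follow the Settings lines above, and the exact spec layout may differ - see the committed spec files):
```bash
# Print the settings-relevant fields from a benchmark spec for manual comparison
grep -nE '"(max_frame|num_envs|max_session|log_frequency)"' \
  slm_lab/spec/benchmark/ppo/ppo_cartpole.json
```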

#### 2. Run Benchmark & Commit Specs
* **Run**: Execute the benchmark locally or remotely using the commands in [Usage](#usage).
* **Commit Specs**: Always commit the `spec.json` file used for the run to the repo.
* **Table Entry**: Ensure `BENCHMARKS.md` has an entry with the correct `SPEC_FILE` and `SPEC_NAME`.

#### 3. Record Scores & Plots
* **Score**: At run completion, extract `total_reward_ma` from the logs (`trial_metrics`).
* **Link**: Add the HuggingFace folder link: `[FOLDER](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/FOLDER)`
* **Pull data**: `source .env && uv run hf download SLM-Lab/benchmark-dev --include "data/FOLDER/*" --local-dir hf_data --repo-type dataset`
* **Plot**: Generate plots using the run folders from the table:
```bash
slm-lab plot -t "CartPole-v1" -f ppo_cartpole_2026...,dqn_cartpole_2026...
```
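
Putting the **Pull data** and **Plot** steps together for LunarLander-v3, with run folders taken from the Phase 2 table (illustrative - substitute the folders you actually benchmarked):
```bash
# Pull two run folders listed in the LunarLander-v3 table (one command per folder)
source .env && uv run hf download SLM-Lab/benchmark-dev --include "data/dqn_concat_lunar_2026_01_30_215529/*" --local-dir hf_data --repo-type dataset
source .env && uv run hf download SLM-Lab/benchmark-dev --include "data/ddqn_per_concat_lunar_2026_01_30_215532/*" --local-dir hf_data --repo-type dataset

# Overlay them in a single comparison plot
slm-lab plot -t "LunarLander-v3" -f dqn_concat_lunar_2026_01_30_215529,ddqn_per_concat_lunar_2026_01_30_215532
```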

### Environment Settings

Standardized settings for fair comparison. The **Settings** line in each result table shows these values.

| Env Category | num_envs | max_frame | log_frequency | grace_period |
|--------------|----------|-----------|---------------|--------------|
| Classic Control | 4 | 2e5-3e5 | 500 | 1e4 |
| Box2D | 8 | 3e5 | 1000 | 5e4 |
| MuJoCo | 16 | 1e6-10e6 | 1e4 | 1e5-1e6 |
| Atari | 16 | 10e6 | 10000 | 5e5 |
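
For orientation, a hypothetical spec fragment showing where these values typically live (field placement is illustrative - the committed spec files and the search example below are authoritative; `grace_period` belongs to the search scheduler, as in the Hyperparameter Search section):
```json
{
  "env": [{"name": "LunarLander-v3", "num_envs": 8, "max_frame": 3e5}],
  "meta": {
    "max_session": 4, "log_frequency": 1000,
    "search_scheduler": {"grace_period": 5e4, "reduction_factor": 3}
  }
}
```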

### Hyperparameter Search

When an algorithm fails to reach its target, run `search` instead of `train`:

```bash
slm-lab run SPEC_FILE SPEC_NAME search  # local
source .env && slm-lab run-remote --gpu SPEC_FILE SPEC_NAME search -n NAME  # remote
```

| Stage | Mode | Config | Purpose |
|-------|------|--------|---------|
| ASHA | `search` | `max_session=1`, `search_scheduler` enabled | Wide exploration with early stopping |
| Multi | `search` | `max_session=4`, NO `search_scheduler` | Robust validation with averaging |
| Validate | `train` | Final spec | Confirmation run |

> Do not use search results as benchmark results - use the final validation run with a committed spec.

Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20+ = 5+ dims).

```json
{
  "meta": {
    "max_session": 1, "max_trial": 16,
    "search_resources": {"cpu": 1, "gpu": 0.125},
    "search_scheduler": {"grace_period": 1e5, "reduction_factor": 3}
  },
  "search": {
    "agent.algorithm.gamma__uniform": [0.98, 0.999],
    "agent.algorithm.lam__uniform": [0.9, 0.98],
    "agent.net.optim_spec.lr__loguniform": [1e-4, 1e-3]
  }
}
```
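
To launch such a search for a concrete case - e.g. A2C on LunarLander-v3, which fell short of target in Phase 2 (spec file and name from that table; the `-n` label is arbitrary):
```bash
source .env && slm-lab run-remote --gpu \
  slm_lab/spec/benchmark/a2c/a2c_gae_lunar.json a2c_gae_lunar search -n a2c-lunar-search
```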

## Progress

| Phase | Category | Envs | REINFORCE | SARSA | DQN | DDQN+PER | A2C | PPO | SAC | Overall |
|-------|----------|------|-----------|-------|-----|----------|-----|-----|-----|---------|
| 1 | Classic Control | 3 | 🔄 | 🔄 | 🔄 | 🔄 | 🔄 | 🔄 | 🔄 | Rerun pending |
| 2 | Box2D | 2 | N/A | N/A | 🔄 | 🔄 | 🔄 | 🔄 | 🔄 | Rerun pending |
| 3 | MuJoCo | 11 | N/A | N/A | N/A | N/A | 🔄 | 🔄 | 🔄 | Rerun pending |
| 4 | Atari | 59 | N/A | N/A | Skip | Skip | Skip | 🔄 | N/A | **54 games** (not in this rerun) |

**Legend**: ✅ Solved | ⚠️ Close (>80%) | 📊 Acceptable | ❌ Failed | 🔄 In progress/Pending | Skip = Not started | N/A = Not applicable

---

## Results

### Phase 1: Classic Control

#### 1.1 CartPole-v1

**Docs**: [CartPole](https://gymnasium.farama.org/environments/classic_control/cart_pole/) | State: Box(4) | Action: Discrete(2) | Target reward MA > 400

**Settings**: max_frame 2e5 | num_envs 4 | max_session 4 | log_frequency 500

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| REINFORCE | ✅ | 469.68 | [slm_lab/spec/benchmark/reinforce/reinforce_cartpole.json](../slm_lab/spec/benchmark/reinforce/reinforce_cartpole.json) | reinforce_cartpole | [reinforce_cartpole_2026_01_30_215510](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/reinforce_cartpole_2026_01_30_215510) |
| SARSA | ✅ | 421.58 | [slm_lab/spec/benchmark/sarsa/sarsa_cartpole.json](../slm_lab/spec/benchmark/sarsa/sarsa_cartpole.json) | sarsa_boltzmann_cartpole | [sarsa_boltzmann_cartpole_2026_01_30_215508](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sarsa_boltzmann_cartpole_2026_01_30_215508) |
| DQN | ⚠️ | 188.07 | [slm_lab/spec/benchmark/dqn/dqn_cartpole.json](../slm_lab/spec/benchmark/dqn/dqn_cartpole.json) | dqn_boltzmann_cartpole | [dqn_boltzmann_cartpole_2026_01_30_215213](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/dqn_boltzmann_cartpole_2026_01_30_215213) |
| DDQN+PER | ✅ | 432.88 | [slm_lab/spec/benchmark/dqn/dqn_cartpole.json](../slm_lab/spec/benchmark/dqn/dqn_cartpole.json) | ddqn_per_boltzmann_cartpole | [ddqn_per_boltzmann_cartpole_2026_01_30_215454](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ddqn_per_boltzmann_cartpole_2026_01_30_215454) |
| A2C | ✅ | 499.73 | [slm_lab/spec/benchmark/a2c/a2c_gae_cartpole.json](../slm_lab/spec/benchmark/a2c/a2c_gae_cartpole.json) | a2c_gae_cartpole | [a2c_gae_cartpole_2026_01_30_215337](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_cartpole_2026_01_30_215337) |
| PPO | ✅ | 499.54 | [slm_lab/spec/benchmark/ppo/ppo_cartpole.json](../slm_lab/spec/benchmark/ppo/ppo_cartpole.json) | ppo_cartpole | [ppo_cartpole_2026_01_30_221924](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_cartpole_2026_01_30_221924) |
| SAC | ⚠️ | 359.69 | [slm_lab/spec/benchmark/sac/sac_cartpole.json](../slm_lab/spec/benchmark/sac/sac_cartpole.json) | sac_cartpole | [sac_cartpole_2026_01_30_221934](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_cartpole_2026_01_30_221934) |

![CartPole-v1 graph](plots/CartPole-v1_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 1.2 Acrobot-v1

**Docs**: [Acrobot](https://gymnasium.farama.org/environments/classic_control/acrobot/) | State: Box(6) | Action: Discrete(3) | Target reward MA > -100

**Settings**: max_frame 3e5 | num_envs 4 | max_session 4 | log_frequency 500

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| DQN | ✅ | -94.81 | [slm_lab/spec/benchmark/dqn/dqn_acrobot.json](../slm_lab/spec/benchmark/dqn/dqn_acrobot.json) | dqn_boltzmann_acrobot | [dqn_boltzmann_acrobot_2026_01_30_215429](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/dqn_boltzmann_acrobot_2026_01_30_215429) |
| DDQN+PER | ✅ | -85.17 | [slm_lab/spec/benchmark/dqn/ddqn_per_acrobot.json](../slm_lab/spec/benchmark/dqn/ddqn_per_acrobot.json) | ddqn_per_acrobot | [ddqn_per_acrobot_2026_01_30_215436](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ddqn_per_acrobot_2026_01_30_215436) |
| A2C | ✅ | -83.75 | [slm_lab/spec/benchmark/a2c/a2c_gae_acrobot.json](../slm_lab/spec/benchmark/a2c/a2c_gae_acrobot.json) | a2c_gae_acrobot | [a2c_gae_acrobot_2026_01_30_215413](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_acrobot_2026_01_30_215413) |
| PPO | ✅ | -81.43 | [slm_lab/spec/benchmark/ppo/ppo_acrobot.json](../slm_lab/spec/benchmark/ppo/ppo_acrobot.json) | ppo_acrobot | [ppo_acrobot_2026_01_30_215352](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_acrobot_2026_01_30_215352) |
| SAC | ✅ | -97.08 | [slm_lab/spec/benchmark/sac/sac_acrobot.json](../slm_lab/spec/benchmark/sac/sac_acrobot.json) | sac_acrobot | [sac_acrobot_2026_01_30_215401](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_acrobot_2026_01_30_215401) |

![Acrobot-v1 graph](plots/Acrobot-v1_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 1.3 Pendulum-v1

**Docs**: [Pendulum](https://gymnasium.farama.org/environments/classic_control/pendulum/) | State: Box(3) | Action: Box(1) | Target reward MA > -200

**Settings**: max_frame 3e5 | num_envs 4 | max_session 4 | log_frequency 500

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| A2C | ❌ | -553.00 | [slm_lab/spec/benchmark/a2c/a2c_gae_pendulum.json](../slm_lab/spec/benchmark/a2c/a2c_gae_pendulum.json) | a2c_gae_pendulum | [a2c_gae_pendulum_2026_01_30_215421](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_pendulum_2026_01_30_215421) |
| PPO | ✅ | -168.26 | [slm_lab/spec/benchmark/ppo/ppo_pendulum.json](../slm_lab/spec/benchmark/ppo/ppo_pendulum.json) | ppo_pendulum | [ppo_pendulum_2026_01_30_215944](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_pendulum_2026_01_30_215944) |
| SAC | ✅ | -152.33 | [slm_lab/spec/benchmark/sac/sac_pendulum.json](../slm_lab/spec/benchmark/sac/sac_pendulum.json) | sac_pendulum | [sac_pendulum_2026_01_30_215454](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_pendulum_2026_01_30_215454) |

![Pendulum-v1 graph](plots/Pendulum-v1_multi_trial_graph_mean_returns_ma_vs_frames.png)

### Phase 2: Box2D

#### 2.1 LunarLander-v3 (Discrete)

**Docs**: [LunarLander](https://gymnasium.farama.org/environments/box2d/lunar_lander/) | State: Box(8) | Action: Discrete(4) | Target reward MA > 200

**Settings**: max_frame 3e5 | num_envs 8 | max_session 4 | log_frequency 1000

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| DQN | ⚠️ | 183.64 | [slm_lab/spec/benchmark/dqn/dqn_lunar.json](../slm_lab/spec/benchmark/dqn/dqn_lunar.json) | dqn_concat_lunar | [dqn_concat_lunar_2026_01_30_215529](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/dqn_concat_lunar_2026_01_30_215529) |
| DDQN+PER | ✅ | 261.49 | [slm_lab/spec/benchmark/dqn/ddqn_per_lunar.json](../slm_lab/spec/benchmark/dqn/ddqn_per_lunar.json) | ddqn_per_concat_lunar | [ddqn_per_concat_lunar_2026_01_30_215532](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ddqn_per_concat_lunar_2026_01_30_215532) |
| A2C | ❌ | 9.53 | [slm_lab/spec/benchmark/a2c/a2c_gae_lunar.json](../slm_lab/spec/benchmark/a2c/a2c_gae_lunar.json) | a2c_gae_lunar | [a2c_gae_lunar_2026_01_30_215529](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/a2c_gae_lunar_2026_01_30_215529) |
| PPO | ⚠️ | 159.02 | [slm_lab/spec/benchmark/ppo/ppo_lunar.json](../slm_lab/spec/benchmark/ppo/ppo_lunar.json) | ppo_lunar | [ppo_lunar_2026_01_30_215550](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_lunar_2026_01_30_215550) |
| SAC | ❌ | -75.43 | [slm_lab/spec/benchmark/sac/sac_lunar.json](../slm_lab/spec/benchmark/sac/sac_lunar.json) | sac_lunar | [sac_lunar_2026_01_30_215552](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/sac_lunar_2026_01_30_215552) |

![LunarLander-v3 graph](plots/LunarLander-v3_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 2.2 LunarLander-v3 (Continuous)

**Docs**: [LunarLander](https://gymnasium.farama.org/environments/box2d/lunar_lander/) | State: Box(8) | Action: Box(2) | Target reward MA > 200

**Settings**: max_frame 3e5 | num_envs 8 | max_session 4 | log_frequency 1000

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| A2C | ❌ | -38.18 | [slm_lab/spec/benchmark/a2c/a2c_gae_lunar.json](../slm_lab/spec/benchmark/a2c/a2c_gae_lunar.json) | a2c_gae_lunar_continuous | [a2c_gae_lunar_continuous_2026_01_30_215630](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/a2c_gae_lunar_continuous_2026_01_30_215630) |
| PPO | ⚠️ | 165.48 | [slm_lab/spec/benchmark/ppo/ppo_lunar.json](../slm_lab/spec/benchmark/ppo/ppo_lunar.json) | ppo_lunar_continuous | [ppo_lunar_continuous_2026_01_31_104549](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_lunar_continuous_2026_01_31_104549) |
| SAC | ✅ | 208.60 | [slm_lab/spec/benchmark/sac/sac_lunar.json](../slm_lab/spec/benchmark/sac/sac_lunar.json) | sac_lunar_continuous | [sac_lunar_continuous_2026_01_31_104537](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/sac_lunar_continuous_2026_01_31_104537) |

![LunarLander-v3 Continuous graph](plots/LunarLander-v3_Continuous_multi_trial_graph_mean_returns_ma_vs_frames.png)

### Phase 3: MuJoCo

**Docs**: [MuJoCo environments](https://gymnasium.farama.org/environments/mujoco/) | State/Action: Continuous | Target: Practical baselines (no official "solved" threshold)

**Settings**: max_frame 4e6-10e6 | num_envs 16 | max_session 4 | log_frequency 1e4

**Algorithm: PPO only** - SAC is omitted (off-policy training is too compute-heavy for systematic benchmarking). Network: MLP [256,256] tanh, orthogonal init.

**Spec Variants**: Two unified specs in [ppo_mujoco.json](../slm_lab/spec/benchmark/ppo/ppo_mujoco.json), plus individual specs for edge cases.

| SPEC_NAME | Envs | Key Config |
|-----------|------|------------|
| ppo_mujoco | HalfCheetah, Walker, Humanoid, HumanoidStandup | gamma=0.99, lam=0.95 |
| ppo_mujoco_longhorizon | Reacher, Pusher | gamma=0.997, lam=0.97 |
| Individual specs | Hopper, Swimmer, Ant, IP, IDP | See spec files for tuned hyperparams |

**Reproduce**: Copy `ENV`, `SPEC_FILE`, `SPEC_NAME` from the table. Use `-s max_frame=` for all specs, and add `-s env=` for the unified specs:
```bash
# Unified specs (ppo_mujoco.json)
source .env && slm-lab run-remote --gpu -s env=ENV -s max_frame=MAX_FRAME \
  slm_lab/spec/benchmark/ppo/ppo_mujoco.json SPEC_NAME train -n NAME

# Individual specs (env hardcoded)
source .env && slm-lab run-remote --gpu -s max_frame=MAX_FRAME \
  slm_lab/spec/benchmark/ppo/SPEC_FILE SPEC_NAME train -n NAME
```
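
For example, filling in the table values for HalfCheetah-v5 (unified spec) and Hopper-v5 (individual spec); the `-n` names are arbitrary run labels:
```bash
# HalfCheetah-v5: unified spec, env and max_frame substituted from the table
source .env && slm-lab run-remote --gpu -s env=HalfCheetah-v5 -s max_frame=10e6 \
  slm_lab/spec/benchmark/ppo/ppo_mujoco.json ppo_mujoco train -n halfcheetah

# Hopper-v5: individual spec (env hardcoded), only max_frame substituted
source .env && slm-lab run-remote --gpu -s max_frame=4e6 \
  slm_lab/spec/benchmark/ppo/ppo_hopper.json ppo_hopper train -n hopper
```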

| ENV | MAX_FRAME | SPEC_FILE | SPEC_NAME |
|-----|-----------|-----------|-----------|
| HalfCheetah-v5 | 10e6 | ppo_mujoco.json | ppo_mujoco |
| Walker2d-v5 | 10e6 | ppo_mujoco.json | ppo_mujoco |
| Humanoid-v5 | 10e6 | ppo_mujoco.json | ppo_mujoco |
| HumanoidStandup-v5 | 4e6 | ppo_mujoco.json | ppo_mujoco |
| Hopper-v5 | 4e6 | ppo_hopper.json | ppo_hopper |
| Swimmer-v5 | 4e6 | ppo_swimmer.json | ppo_swimmer |
| Ant-v5 | 10e6 | ppo_ant.json | ppo_ant |
| Reacher-v5 | 4e6 | ppo_mujoco.json | ppo_mujoco_longhorizon |
| Pusher-v5 | 4e6 | ppo_mujoco.json | ppo_mujoco_longhorizon |
| InvertedPendulum-v5 | 4e6 | ppo_inverted_pendulum.json | ppo_inverted_pendulum |
| InvertedDoublePendulum-v5 | 10e6 | ppo_inverted_double_pendulum.json | ppo_inverted_double_pendulum |

#### 3.1 Hopper-v5

**Docs**: [Hopper](https://gymnasium.farama.org/environments/mujoco/hopper/) | State: Box(11) | Action: Box(3) | Target reward MA > 2000

**Settings**: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ⚠️ | 1174.57 | [slm_lab/spec/benchmark/ppo/ppo_hopper.json](../slm_lab/spec/benchmark/ppo/ppo_hopper.json) | ppo_hopper | [ppo_hopper_2026_01_30_220138](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_hopper_2026_01_30_220138) |

![Hopper-v5 graph](plots/Hopper-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.2 HalfCheetah-v5

**Docs**: [HalfCheetah](https://gymnasium.farama.org/environments/mujoco/half_cheetah/) | State: Box(17) | Action: Box(6) | Target reward MA > 5000

**Settings**: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 5851.70 | [slm_lab/spec/benchmark/ppo/ppo_mujoco.json](../slm_lab/spec/benchmark/ppo/ppo_mujoco.json) | ppo_mujoco | [ppo_mujoco_halfcheetah_2026_01_30_230302](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_mujoco_halfcheetah_2026_01_30_230302) |

![HalfCheetah-v5 graph](plots/HalfCheetah-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.3 Walker2d-v5

**Docs**: [Walker2d](https://gymnasium.farama.org/environments/mujoco/walker2d/) | State: Box(17) | Action: Box(6) | Target reward MA > 3500

**Settings**: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 4042.07 | [slm_lab/spec/benchmark/ppo/ppo_mujoco.json](../slm_lab/spec/benchmark/ppo/ppo_mujoco.json) | ppo_mujoco | [ppo_mujoco_walker2d_2026_01_30_222124](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_mujoco_walker2d_2026_01_30_222124) |

![Walker2d-v5 graph](plots/Walker2d-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.4 Ant-v5

**Docs**: [Ant](https://gymnasium.farama.org/environments/mujoco/ant/) | State: Box(105) | Action: Box(8) | Target reward MA > 2000

**Settings**: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 2514.64 | [slm_lab/spec/benchmark/ppo/ppo_ant.json](../slm_lab/spec/benchmark/ppo/ppo_ant.json) | ppo_ant | [ppo_ant_2026_01_31_042006](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_ant_2026_01_31_042006) |

![Ant-v5 graph](plots/Ant-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.5 Swimmer-v5

**Docs**: [Swimmer](https://gymnasium.farama.org/environments/mujoco/swimmer/) | State: Box(8) | Action: Box(2) | Target reward MA > 200

**Settings**: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 229.31 | [slm_lab/spec/benchmark/ppo/ppo_swimmer.json](../slm_lab/spec/benchmark/ppo/ppo_swimmer.json) | ppo_swimmer | [ppo_swimmer_2026_01_30_215922](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_swimmer_2026_01_30_215922) |

![Swimmer-v5 graph](plots/Swimmer-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.6 Reacher-v5

**Docs**: [Reacher](https://gymnasium.farama.org/environments/mujoco/reacher/) | State: Box(11) | Action: Box(2) | Target reward MA > -10

**Settings**: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | -5.08 | [slm_lab/spec/benchmark/ppo/ppo_mujoco.json](../slm_lab/spec/benchmark/ppo/ppo_mujoco.json) | ppo_mujoco_longhorizon | [ppo_mujoco_longhorizon_reacher_2026_01_30_215805](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_mujoco_longhorizon_reacher_2026_01_30_215805) |

![Reacher-v5 graph](plots/Reacher-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.7 Pusher-v5

**Docs**: [Pusher](https://gymnasium.farama.org/environments/mujoco/pusher/) | State: Box(23) | Action: Box(7) | Target reward MA > -50

**Settings**: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | -49.09 | [slm_lab/spec/benchmark/ppo/ppo_mujoco.json](../slm_lab/spec/benchmark/ppo/ppo_mujoco.json) | ppo_mujoco_longhorizon | [ppo_mujoco_longhorizon_pusher_2026_01_30_215824](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_mujoco_longhorizon_pusher_2026_01_30_215824) |

![Pusher-v5 graph](plots/Pusher-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.8 InvertedPendulum-v5

**Docs**: [InvertedPendulum](https://gymnasium.farama.org/environments/mujoco/inverted_pendulum/) | State: Box(4) | Action: Box(1) | Target reward MA ~1000

**Settings**: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 944.87 | [slm_lab/spec/benchmark/ppo/ppo_inverted_pendulum.json](../slm_lab/spec/benchmark/ppo/ppo_inverted_pendulum.json) | ppo_inverted_pendulum | [ppo_inverted_pendulum_2026_01_30_230211](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_inverted_pendulum_2026_01_30_230211) |

![InvertedPendulum-v5 graph](plots/InvertedPendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.9 InvertedDoublePendulum-v5

**Docs**: [InvertedDoublePendulum](https://gymnasium.farama.org/environments/mujoco/inverted_double_pendulum/) | State: Box(11) | Action: Box(1) | Target reward MA ~8000

**Settings**: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 7622.00 | [slm_lab/spec/benchmark/ppo/ppo_inverted_double_pendulum.json](../slm_lab/spec/benchmark/ppo/ppo_inverted_double_pendulum.json) | ppo_inverted_double_pendulum | [ppo_inverted_double_pendulum_2026_01_30_220651](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_inverted_double_pendulum_2026_01_30_220651) |

![InvertedDoublePendulum-v5 graph](plots/InvertedDoublePendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.10 Humanoid-v5

**Docs**: [Humanoid](https://gymnasium.farama.org/environments/mujoco/humanoid/) | State: Box(376) | Action: Box(17) | Target reward MA > 1000

**Settings**: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 3774.08 | [slm_lab/spec/benchmark/ppo/ppo_mujoco.json](../slm_lab/spec/benchmark/ppo/ppo_mujoco.json) | ppo_mujoco | [ppo_mujoco_humanoid_2026_01_30_222339](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_mujoco_humanoid_2026_01_30_222339) |

![Humanoid-v5 graph](plots/Humanoid-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

#### 3.11 HumanoidStandup-v5

**Docs**: [HumanoidStandup](https://gymnasium.farama.org/environments/mujoco/humanoid_standup/) | State: Box(376) | Action: Box(17) | Target reward MA > 100000

**Settings**: max_frame 4e6 | num_envs 16 | max_session 4 | log_frequency 1e4

| Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Repo |
|-----------|--------|-----|-----------|-----------|---------|
| PPO | ✅ | 165841.17 | [slm_lab/spec/benchmark/ppo/ppo_mujoco.json](../slm_lab/spec/benchmark/ppo/ppo_mujoco.json) | ppo_mujoco | [ppo_mujoco_humanoidstandup_2026_01_30_215802](https://huggingface.co/datasets/SLM-Lab/benchmark-dev/tree/main/data/ppo_mujoco_humanoidstandup_2026_01_30_215802) |

![HumanoidStandup-v5 graph](plots/HumanoidStandup-v5_multi_trial_graph_mean_returns_ma_vs_frames.png)

### Phase 4: Atari

**Docs**: [Atari environments](https://ale.farama.org/environments/) | State: Box(84,84,4 after preprocessing) | Action: Discrete(4-18, game-dependent) | Solved: Game-specific thresholds

**Settings**: max_frame 10e6 | num_envs 16 | max_session 4 | log_frequency 10000

**Environment**:
- Gymnasium ALE v5 with `life_loss_info=true`
- v5 is harder than v4 due to sticky actions (default `repeat_action_probability=0.25` vs v4's 0.0), which randomly repeat the agent's actions to simulate console stochasticity and prevent memorization, following [Machado et al. (2018)](https://arxiv.org/abs/1709.06009) best practices. See [ALE version history](https://ale.farama.org/environments/#version-history-and-naming-schemes).

**Algorithm: PPO**:
- **Network**: ConvNet [32,64,64] + 512fc (Nature CNN), orthogonal init, normalize=true, clip_grad_val=0.5
- **Hyperparams**: AdamW (lr=2.5e-4, eps=1e-5), minibatch_size=256, time_horizon=128, training_epoch=4, clip_eps=0.1, entropy_coef=0.01

**Lambda Variants**: All use one spec file ([slm_lab/spec/benchmark/ppo/ppo_atari.json](../slm_lab/spec/benchmark/ppo/ppo_atari.json)), differing only in GAE lambda. A lower lambda biases toward immediate rewards (action games); a higher lambda gives a longer credit-assignment horizon (strategic games).

| SPEC_NAME | Lambda | Best for |
|-----------|--------|----------|
| ppo_atari | 0.95 | Long-horizon, strategic games (default) |
| ppo_atari_lam85 | 0.85 | Mixed/moderate games |
| ppo_atari_lam70 | 0.70 | Fast action games |

**Reproduce**: Copy `ENV` from the first column and `SPEC_NAME` from the column header. All runs use the same SPEC_FILE:
```bash
source .env && slm-lab run-remote --gpu -s env=ENV \
  slm_lab/spec/benchmark/ppo/ppo_atari.json SPEC_NAME train -n NAME
```
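
For instance, Breakout's best score in the table below comes from the lam70 variant; to reproduce it (the `-n` label is arbitrary):
```bash
source .env && slm-lab run-remote --gpu -s env=ALE/Breakout-v5 \
  slm_lab/spec/benchmark/ppo/ppo_atari.json ppo_atari_lam70 train -n breakout
```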

| ENV\SPEC_NAME | ppo_atari | ppo_atari_lam85 | ppo_atari_lam70 |
| -------- | ----------------- | --------------- | --------------- |
| ALE/Adventure-v5 | Skip | Skip | Skip |
| ALE/AirRaid-v5 | **8245** | - | - |
| ALE/Alien-v5 | **1453** | 1353 | 1274 |
| ALE/Amidar-v5 | 574 | **580** | - |
| ALE/Assault-v5 | 4059 | **4293** | 3314 |
| ALE/Asterix-v5 | 2967 | **3482** | - |
| ALE/Asteroids-v5 | 1497 | **1554** | - |
| ALE/Atlantis-v5 | **792886** | 754k | 710k |
| ALE/BankHeist-v5 | **1045** | 1045 | - |
| ALE/BattleZone-v5 | 21270 | **26383** | 13857 |
| ALE/BeamRider-v5 | **2765** | - | - |
| ALE/Berzerk-v5 | **1072** | - | - |
| ALE/Bowling-v5 | **46.45** | - | - |
| ALE/Boxing-v5 | **91.17** | - | - |
| ALE/Breakout-v5 | 191 | 292 | **327** |
| ALE/Carnival-v5 | 3071 | 3013 | **3967** |
| ALE/Centipede-v5 | 3917 | - | **4915** |
| ALE/ChopperCommand-v5 | **5355** | - | - |
| ALE/CrazyClimber-v5 | 107183 | **107370** | - |
| ALE/Defender-v5 | 37162 | - | **51439** |
| ALE/DemonAttack-v5 | 7755 | - | **16558** |
| ALE/DoubleDunk-v5 | **-2.38** | - | - |
| ALE/ElevatorAction-v5 | **5446** | 363 | 3933 |
| ALE/Enduro-v5 | 414 | **898** | 872 |
| ALE/FishingDerby-v5 | 22.80 | **27.10** | - |
| ALE/Freeway-v5 | **31.30** | - | - |
| ALE/Frostbite-v5 | **301** | 275 | 267 |
| ALE/Gopher-v5 | 4172 | - | **6508** |
| ALE/Gravitar-v5 | **599** | 253 | 145 |
| ALE/Hero-v5 | 21052 | **28238** | - |
| ALE/IceHockey-v5 | **-3.93** | -5.58 | -7.36 |
| ALE/Jamesbond-v5 | **662** | - | - |
| ALE/JourneyEscape-v5 | -1582 | **-1252** | -1547 |
| ALE/Kangaroo-v5 | 2623 | **9912** | - |
| ALE/Krull-v5 | **7841** | - | - |
| ALE/KungFuMaster-v5 | 18973 | 28334 | **29068** |
| ALE/MontezumaRevenge-v5 | Skip | Skip | Skip |
| ALE/MsPacman-v5 | 2308 | **2372** | 2297 |
| ALE/NameThisGame-v5 | **5993** | - | - |
| ALE/Phoenix-v5 | 7940 | - | **15659** |
| ALE/Pitfall-v5 | Skip | Skip | Skip |
| ALE/Pong-v5 | 15.01 | **16.91** | 12.85 |
| ALE/Pooyan-v5 | 4704 | - | **5716** |
| ALE/PrivateEye-v5 | Skip | Skip | Skip |
| ALE/Qbert-v5 | **15094** | - | - |
| ALE/Riverraid-v5 | 7319 | **9428** | - |
| ALE/RoadRunner-v5 | 24204 | **37015** | - |
| ALE/Robotank-v5 | **20.07** | 8.24 | 2.59 |
| ALE/Seaquest-v5 | **1796** | - | - |
| ALE/Skiing-v5 | **-19340** | -22980 | -29975 |
| ALE/Solaris-v5 | **2094** | - | - |
| ALE/SpaceInvaders-v5 | **726** | - | - |
| ALE/StarGunner-v5 | 31862 | - | **47495** |
| ALE/Surround-v5 | **-2.52** | - | -6.79 |
| ALE/Tennis-v5 | -7.66 | **-4.41** | - |
| ALE/TimePilot-v5 | **4668** | - | - |
| ALE/Tutankham-v5 | 203 | **217** | - |
| ALE/UpNDown-v5 | **182472** | - | - |
| ALE/Venture-v5 | Skip | Skip | Skip |
| ALE/VideoPinball-v5 | 31385 | - | **56746** |
| ALE/WizardOfWor-v5 | **5814** | 5466 | 4740 |
| ALE/YarsRevenge-v5 | **17120** | - | - |
| ALE/Zaxxon-v5 | **10756** | - | - |

**Legend**: **Bold** = Best score | Skip = Hard exploration | - = Not tested

---

#### Sticky Actions Validation (v5 vs v4-style)

Testing the hypothesis that the lower scores are due to sticky actions (`repeat_action_probability=0.25` in v5 vs `0.0` in v4/CleanRL).

**Environment**: Same as above, but with `repeat_action_probability=0.0` (matching CleanRL/old v4 behavior)

**Reproduce**: Copy `ENV` from the first column:
```bash
source .env && slm-lab run-remote --gpu -s env=ENV \
  slm_lab/spec/benchmark/ppo/ppo_atari.json ppo_atari_nosticky train -n NAME
```

**Results** (testing games with significant regression):

| ENV | v5 (sticky=0.25) | v4-style (sticky=0.0) | Diff | % Change |
| --- | ---------------- | --------------------- | ---- | -------- |
| ALE/Skiing-v5 | -19340 | - | - | - |
| ALE/Frostbite-v5 | 301 | - | - | - |
| ALE/ElevatorAction-v5 | 5446 | - | - | - |
| ALE/Gravitar-v5 | 599 | - | - | - |
| ALE/WizardOfWor-v5 | 5814 | - | - | - |
| ALE/Alien-v5 | 1453 | - | - | - |
| ALE/KungFuMaster-v5 | 29068 | - | - | - |
| ALE/Atlantis-v5 | 792886 | - | - | - |
| ALE/Pong-v5 | 15.01 | - | - | - |
| ALE/Breakout-v5 | 191 | - | - | - |
docs/plots/Acrobot-v1_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Ant-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/CartPole-v1_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/HalfCheetah-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Hopper-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Humanoid-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/HumanoidStandup-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/InvertedDoublePendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/InvertedPendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/LunarLander-v3_Continuous_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/LunarLander-v3_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Pendulum-v1_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Pusher-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Reacher-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Swimmer-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Walker2d-v5_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)