Upload folder using huggingface_hub
- docs/BENCHMARKS.md +0 -0
- docs/CHANGELOG.md +272 -0
- docs/plots/Acrobot-v1_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/AirRaid-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Alien-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Amidar-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Ant-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Assault-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Asterix-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Asteroids-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Atlantis-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/BankHeist-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/BattleZone-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/BeamRider-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Berzerk-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Bowling-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Boxing-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Breakout-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Carnival-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/CartPole-v1_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Centipede-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/ChopperCommand-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/CrazyClimber-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Defender-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/DemonAttack-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/DoubleDunk-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Enduro-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/FishingDerby-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Freeway-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Frostbite-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Gopher-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Gravitar-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/HalfCheetah-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Hero-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Hopper-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Humanoid-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/HumanoidStandup-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/IceHockey-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/InvertedDoublePendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/InvertedPendulum-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/Jamesbond-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/JourneyEscape-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Kangaroo-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Krull-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/KungFuMaster-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/LunarLander-v3_Continuous_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/LunarLander-v3_Discrete_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
- docs/plots/MsPacman-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/NameThisGame-v5_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Pendulum-v1_multi_trial_graph_mean_returns_ma_vs_frames.png +2 -2
docs/BENCHMARKS.md
CHANGED
The diff for this file is too large to render.
docs/CHANGELOG.md
ADDED
@@ -0,0 +1,272 @@

# SLM-Lab v5.1.0

TorchArc YAML benchmarks replace the original hardcoded network architectures across all benchmark categories.

- **TorchArc integration**: All algorithms (REINFORCE, SARSA, DQN, DDQN+PER, A2C, PPO, SAC) now use TorchArc YAML-defined networks instead of hardcoded PyTorch modules
- **Full benchmark validation**: Classic Control, Box2D, MuJoCo (11 envs), and Atari (54 games) re-benchmarked with TorchArc — results match or exceed the original scores
- **SAC Atari**: New SAC Atari benchmarks (48 games) with discrete action support
- **Pre-commit hooks**: Conventional commit message validation via `.githooks/commit-msg`

---

# SLM-Lab v5.0.0

Modernization release for the current RL ecosystem. Updates SLM-Lab from OpenAI Gym to Gymnasium, adds correct handling of episode termination (the `terminated`/`truncated` fix), and migrates to modern Python tooling.

**TL;DR:** Install with `uv sync`, run with `slm-lab run`. Specs are simpler (no more `body` section or array wrappers). Environment names changed (`CartPole-v1`, `ALE/Pong-v5`, `Hopper-v5`). Code structure preserved for book readers.

> **Book readers:** For exact code from *Foundations of Deep Reinforcement Learning*, use `git checkout v4.1.1`.

---

## Why This Release

SLM-Lab was created as an educational framework for deep reinforcement learning, accompanying *Foundations of Deep Reinforcement Learning*. The code prioritizes clarity and correctness—it should help you understand RL algorithms, not just run them.

Since v4, the RL ecosystem has changed significantly:

- **OpenAI Gym is deprecated.** The Farama Foundation forked it as [Gymnasium](https://gymnasium.farama.org/), now the standard. Gym's `done` flag conflated two concepts: true termination (agent failed/succeeded) and time-limit truncation. Gymnasium fixes this with separate `terminated` and `truncated` signals—important for correct value estimation (see [below](#the-gymnasium-api-change)).

- **Roboschool is abandoned.** MuJoCo became free in 2022, so roboschool is no longer maintained. Gymnasium includes native MuJoCo bindings.

- **Python tooling modernized.** `conda` + `setup.py` → `uv` + `pyproject.toml`. Python 3.12+, PyTorch 2.8+. [uv](https://docs.astral.sh/uv/) emerged as a fast, reliable Python package manager—no more conda environment headaches.

- **Old dependencies don't build anymore.** The v4 dependency stack (old PyTorch, atari-py, mujoco-py, etc.) won't compile on modern hardware, especially ARM machines (Apple Silicon, AWS Graviton). Many deprecated packages simply don't run. A full rebuild was necessary.

This release updates SLM-Lab to work with modern dependencies while preserving the educational code structure. If you've read the book, the code should still be recognizable.

### Critical: Atari v5 Sticky Actions

**SLM-Lab uses Gymnasium ALE v5 defaults.** The v5 default `repeat_action_probability=0.25` (sticky actions) randomly repeats agent actions to simulate console stochasticity, making evaluation harder but more realistic than the v4 default of 0.0 used by most benchmarks (CleanRL, SB3, RL Zoo). This follows the research best practices of [Machado et al. (2018)](https://arxiv.org/abs/1709.06009). See the [ALE version history](https://ale.farama.org/environments/#version-history-and-naming-schemes).
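
As a concrete illustration, this is roughly what the v5 default means at the Gymnasium level. The override kwarg is shown only for a v4-style comparison run, not as an SLM-Lab setting, and the registration call is an assumption about recent ale-py/Gymnasium versions:

```python
import gymnasium as gym
import ale_py

# Register ALE environments with Gymnasium (needed for recent ale-py / Gymnasium releases)
gym.register_envs(ale_py)

# v5 IDs ship with sticky actions: each action is repeated with probability 0.25
env = gym.make("ALE/Pong-v5")

# For a v4-style deterministic comparison, the probability can be overridden at creation
env_no_sticky = gym.make("ALE/Pong-v5", repeat_action_probability=0.0)
```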

### Summary

| v4 | v5 |
|----|----|
| `conda activate lab && python run_lab.py` | `slm-lab run` |
| `CartPole-v0`, `PongNoFrameskip-v4` | `CartPole-v1`, `ALE/Pong-v5` |
| `RoboschoolHopper-v1` | `Hopper-v5` |
| `agent: [{...}]`, `env: [{...}]`, `body: {...}` | `agent: {...}`, `env: {...}` |
| `body.state_dim`, `body.memory` | `agent.state_dim`, `agent.memory` |

---

## Migration from v4

### 1. Install

```bash
uv sync
uv tool install --editable .
```

### 2. Update specs

Remove the array brackets and the `body` section:

```diff
{
- "agent": [{ "name": "PPO", ... }],
- "env": [{ "name": "CartPole-v0", ... }],
- "body": { "product": "outer", "num": 1 },
+ "agent": { "name": "PPO", ... },
+ "env": { "name": "CartPole-v1", ... },
  "meta": { ... }
}
```

### 3. Update environment names

- Classic control: `v0`/`v1` → current version (`CartPole-v1`, `Pendulum-v1`, `LunarLander-v3`)
- Atari: `PongNoFrameskip-v4` → `ALE/Pong-v5`
- Roboschool → MuJoCo: see [Deprecations](#roboschool) for the full mapping

### 4. Run

```bash
slm-lab run spec.json spec_name train
```

See `slm_lab/spec/benchmark/` for updated reference specs.

---

## The Gymnasium API Change

This matters for understanding the code, not just running it.

### The Problem

Gym's `done` flag was ambiguous—it meant "episode ended", but episodes end for two different reasons:

1. **Terminated:** True end state (CartPole fell, agent died, goal reached)
2. **Truncated:** Time limit hit (MuJoCo's 1000-step cap)

For value estimation, these need different treatment. Terminated means future returns are zero. Truncated means future returns exist but weren't observed—you should bootstrap from V(s').

### The Fix

Gymnasium separates the signals:

```python
# Gym
obs, reward, done, info = env.step(action)

# Gymnasium
obs, reward, terminated, truncated, info = env.step(action)
```

All SLM-Lab algorithms now use `terminated` for bootstrapping decisions:

```python
# Only zero out future returns on TRUE termination
q_targets = rewards + gamma * (1 - terminateds) * next_q_preds
```

This is why the code stores `terminateds` and `truncateds` separately in memory—algorithms need `terminated` for correct bootstrapping and `done` for episode boundaries.

This fix particularly matters for time-limited environments like MuJoCo (1000-step limit), where episodes frequently truncate during training. Using `done` instead of `terminated` there significantly hurts learning.
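
A minimal rollout sketch of how the two flags are consumed (illustrative only, not SLM-Lab's actual control loop):

```python
import gymnasium as gym

env = gym.make("Hopper-v5")  # truncates at 1000 steps via the TimeLimit wrapper
obs, info = env.reset(seed=0)
for _ in range(2000):
    action = env.action_space.sample()
    next_obs, reward, terminated, truncated, info = env.step(action)
    # Memory would store both flags: `terminated` drives bootstrapping,
    # while the OR of the two marks the episode boundary for resets/logging.
    done = terminated or truncated
    if done:
        obs, info = env.reset()
    else:
        obs = next_obs
```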

---

## Code Structure Changes

For book readers who want to trace through the code:

### Simplified Agent Design

The `Body` class was removed. Its responsibilities moved to more natural locations:

```python
# v4
state_dim = agent.body.state_dim
memory = agent.body.memory
env = agent.body.env

# v5
state_dim = agent.state_dim
memory = agent.memory
env = agent.env
```

Training metrics tracking now lives in `MetricsTracker` (what `Body` was renamed to).

### Simplified Specs

Multi-agent configurations were rarely used. Specs are now flat:

```python
# v4: agent_spec = spec['agent'][0]
# v5: agent_spec = spec['agent']
```

### Architecture Preserved

The core design is unchanged:

```
Session → Agent → Algorithm → Network
                ↘ Memory
        → Env
```

---

## Algorithm Updates

**PPO:** New options for value target handling—`normalize_v_targets`, `symlog_transform` (from DreamerV3), and `clip_vloss` (CleanRL-style).
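
As a rough sketch of what those options do (standard formulations assumed here rather than copied from the SLM-Lab source; function and parameter names are illustrative):

```python
import torch

def symlog(x: torch.Tensor) -> torch.Tensor:
    # DreamerV3-style symmetric log squashing of value targets: sign(x) * log(1 + |x|)
    return torch.sign(x) * torch.log1p(torch.abs(x))

def clipped_value_loss(v_pred, v_pred_old, v_target, clip_eps=0.2):
    # CleanRL-style clipped value loss: penalize whichever of the raw or
    # clipped prediction is further from the target
    v_clipped = v_pred_old + torch.clamp(v_pred - v_pred_old, -clip_eps, clip_eps)
    return torch.max((v_pred - v_target) ** 2, (v_clipped - v_target) ** 2).mean()
```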

**SAC:** Discrete action support uses the exact expectation over the action distribution (Christodoulou 2019). The target entropy is auto-calculated.
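
A sketch of the discrete-action expectation, in the standard form from Christodoulou (2019); names here are illustrative, not SLM-Lab's API:

```python
import torch
import torch.nn.functional as F

def discrete_soft_state_value(q1, q2, logits, alpha):
    # V(s) = E_{a ~ pi}[ min(Q1, Q2)(s, a) - alpha * log pi(a|s) ],
    # computed exactly by summing over the discrete action set instead of sampling
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    min_q = torch.min(q1, q2)
    return (probs * (min_q - alpha * log_probs)).sum(dim=-1)
```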

**Networks:** Optional `layer_norm` for MLP hidden layers. Custom optimizers (Lookahead, RAdam) removed—use native PyTorch `AdamW`.
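
For reference, the `layer_norm` option corresponds to inserting `nn.LayerNorm` after each hidden linear layer, roughly like this (a sketch, not the TorchArc definition):

```python
import torch.nn as nn

def mlp_hidden_block(in_dim, out_dim, layer_norm=True):
    layers = [nn.Linear(in_dim, out_dim)]
    if layer_norm:
        layers.append(nn.LayerNorm(out_dim))  # normalize pre-activation features
    layers.append(nn.ReLU())
    return nn.Sequential(*layers)
```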

All algorithms use `terminated` (not `done`) for correct bootstrapping.

---

## Benchmarks

All algorithms validated on Gymnasium. Full results are in `docs/BENCHMARKS.md`.

| Category | REINFORCE | SARSA | DQN | DDQN+PER | A2C | PPO | SAC |
|----------|-----------|-------|-----|----------|-----|-----|-----|
| Classic Control | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Box2D | — | — | ✅ | ✅ | ⚠️ | ✅ | ✅ |
| MuJoCo (11 envs) | — | — | — | — | ⚠️ | ✅ All | ✅ All |
| Atari (54 games) | — | — | — | — | ✅ | ✅ | — |

**Atari benchmarks** use ALE v5 with sticky actions (`repeat_action_probability=0.25`). PPO was tested with lambda variants (0.95, 0.85, 0.70) to optimize per-game performance; A2C uses GAE with lambda 0.95.
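
For readers tracing what the lambda variants change, a standard GAE computation looks like this (a generic single-trajectory sketch using the `terminated` convention above, not SLM-Lab's exact implementation; truncation boundary handling is omitted for brevity):

```python
import torch

def compute_gae(rewards, values, next_values, terminateds, gamma=0.99, lam=0.95):
    # Backward recursion:
    #   delta_t = r_t + gamma * V(s_{t+1}) * (1 - term_t) - V(s_t)
    #   A_t     = delta_t + gamma * lam * (1 - term_t) * A_{t+1}
    # Smaller lam leans more on one-step TD errors (lower variance, more bias).
    advs = torch.zeros_like(rewards)
    last_adv = 0.0
    for t in reversed(range(len(rewards))):
        not_term = 1.0 - terminateds[t]
        delta = rewards[t] + gamma * next_values[t] * not_term - values[t]
        last_adv = delta + gamma * lam * not_term * last_adv
        advs[t] = last_adv
    return advs
```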

**Note on scores:** Gymnasium environment versions differ from old Gym—some are harder (CartPole-v1 has stricter termination than v0), some have different reward scales (MuJoCo v5 vs roboschool). Targets reference the [CleanRL](https://docs.cleanrl.dev/) and [Stable-Baselines3](https://stable-baselines3.readthedocs.io/) Gymnasium benchmarks.

---

## New Features

**Hyperparameter search** now uses Ray Tune + Optuna + ASHA early stopping:

```bash
slm-lab run spec.json spec_name search  # Run search locally
```

Add `search_scheduler` to the spec for ASHA early termination of poor trials. See `docs/BENCHMARKS.md` for the search methodology.
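
For orientation, the underlying stack looks roughly like this when used directly. This is a generic Ray Tune example, not SLM-Lab's wrapper; SLM-Lab builds the search space and trainable from the spec, so the names below are illustrative:

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler
from ray.tune.search.optuna import OptunaSearch

def trainable(config):
    # Stand-in for a training run; a real trainable would report intermediate
    # metrics so ASHA can stop poorly performing trials early.
    score = -(config["lr"] - 3e-4) ** 2
    return {"mean_return": score}

tuner = tune.Tuner(
    trainable,
    param_space={"lr": tune.loguniform(1e-5, 1e-2)},
    tune_config=tune.TuneConfig(
        metric="mean_return",
        mode="max",
        search_alg=OptunaSearch(),      # Optuna proposes hyperparameters
        scheduler=ASHAScheduler(),      # ASHA terminates weak trials
        num_samples=16,
    ),
)
results = tuner.fit()
```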

---

## CLI Usage

The CLI uses [Typer](https://typer.tiangolo.com/). Use `--help` on any command for details:

```bash
slm-lab --help                          # List all commands
slm-lab run --help                      # Options for the run command

# Installation
uv sync                                 # Install dependencies
uv tool install --editable .            # Install the slm-lab command

# Basic usage
slm-lab run                             # PPO CartPole (default demo)
slm-lab run --render                    # With visualization
slm-lab run spec.json spec_name train   # Train from spec file
slm-lab run spec.json spec_name dev     # Dev mode (shorter run)
slm-lab run spec.json spec_name search  # Hyperparameter search

# Variable substitution (for template specs)
slm-lab run -s env=ALE/Breakout-v5 slm_lab/spec/benchmark/ppo/ppo_atari.json ppo_atari train

# Cloud training (dstack + HuggingFace)
slm-lab run-remote --gpu spec.json spec_name train  # Launch on cloud GPU
slm-lab list                                        # List experiments on HuggingFace
slm-lab pull spec_name                              # Download results locally

# Utilities
slm-lab run --stop-ray                  # Stop Ray processes
```

Modes: `dev` (quick test), `train` (full training), `search` (hyperparameter search), `enjoy` (evaluate saved model).

---

## Deprecations

### Multi-Agent / Multi-Environment

The v4 `body` spec section and array wrappers (`agent: [{...}]`) supported multi-agent and multi-environment configurations. These were rarely used and added complexity. v5 simplifies to single-agent, single-env, which covers the vast majority of use cases and matches how most RL research is done.

### Unity ML-Agents and VizDoom

These integrations are removed from the core package. Both ecosystems now have their own Gymnasium-compatible wrappers:

- Unity: [gymnasium-unity](https://gymnasium.farama.org/environments/third_party_environments/)
- VizDoom: [vizdoom gymnasium wrapper](https://gymnasium.farama.org/environments/third_party_environments/)

You can still use these environments with SLM-Lab by installing their wrappers and specifying the environment name in your spec.

### Roboschool

Roboschool is abandoned (MuJoCo became free in 2022). Use Gymnasium's native MuJoCo environments instead:

- `RoboschoolHopper-v1` → `Hopper-v5`
- `RoboschoolHalfCheetah-v1` → `HalfCheetah-v5`
- `RoboschoolWalker2d-v1` → `Walker2d-v5`
- `RoboschoolAnt-v1` → `Ant-v5`
- `RoboschoolHumanoid-v1` → `Humanoid-v5`