---
license: mit
language:
- en
pretty_name: SLM-Lab Benchmark Results
tags:
- reinforcement-learning
- deep-learning
- pytorch
task_categories:
- reinforcement-learning
---
# SLM Lab
<p align="center">
<i>Modular Deep Reinforcement Learning framework in PyTorch.</i>
<br>
<a href="https://slm-lab.gitbook.io/slm-lab/">Documentation</a> · <a href="https://github.com/kengz/SLM-Lab/blob/master/docs/BENCHMARKS.md">Benchmark Results</a>
</p>
> **NOTE:** v5.0 updates to Gymnasium, `uv` tooling, and modern dependencies with ARM support; see [CHANGELOG.md](CHANGELOG.md).
>
>Book readers: `git checkout v4.1.1` for *Foundations of Deep Reinforcement Learning* code.
Benchmark result plots (images omitted here) cover the following environments:

| | | | |
|:---:|:---:|:---:|:---:|
| BeamRider | Breakout | KungFuMaster | MsPacman |
| Pong | Qbert | Seaquest | Sp.Invaders |
| Ant | HalfCheetah | Hopper | Humanoid |
| Inv.DoublePendulum | InvertedPendulum | Reacher | Walker |
## Quick Start
```bash
# Install
uv sync
uv tool install --editable .

# Run demo (PPO CartPole)
slm-lab run
slm-lab run --render  # with visualization

# Run custom experiment
slm-lab run spec.json spec_name train         # local training
slm-lab run-remote spec.json spec_name train  # cloud training (dstack)

# Help (CLI uses Typer)
slm-lab --help      # list all commands
slm-lab run --help  # options for the run command

# Troubleshoot: if slm-lab is not found, prefix commands with uv run
uv run slm-lab run
```
## Features
- **Algorithms**: DQN, DDQN+PER, A2C, PPO, SAC and variants
- **Environments**: Gymnasium (Atari, MuJoCo, Box2D)
- **Networks**: MLP, ConvNet, RNN with flexible architectures
- **Hyperparameter Search**: ASHA scheduler with Ray Tune
- **Cloud Training**: dstack integration with auto HuggingFace sync
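
The `spec.json` file passed to `slm-lab run` defines the agent, environment, and meta settings for an experiment. A minimal sketch of the general shape (key names and values here are illustrative, not a complete schema; see the [documentation](https://slm-lab.gitbook.io/slm-lab/) for the exact spec format):

```json
{
  "ppo_cartpole": {
    "agent": [{
      "name": "PPO",
      "algorithm": { "name": "PPO", "gamma": 0.99 }
    }],
    "env": [{ "name": "CartPole-v1", "max_frame": 100000 }],
    "meta": { "max_session": 4, "max_trial": 1 }
  }
}
```

The top-level key (`ppo_cartpole` above) is the `spec_name` referenced on the command line.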
## Cloud Training (dstack)
Run experiments on cloud GPUs with automatic result sync to HuggingFace.
```bash
# Setup
cp .env.example .env # Add HF_TOKEN
uv tool install dstack # Install dstack CLI
# Configure dstack server - see https://dstack.ai/docs/quickstart
# Run on cloud
slm-lab run-remote spec.json spec_name train # CPU training (default)
slm-lab run-remote spec.json spec_name search # CPU ASHA search (default)
slm-lab run-remote --gpu spec.json spec_name train # GPU training (for image envs)
# Sync results
slm-lab pull spec_name # Download from HuggingFace
slm-lab list # List available experiments
```
Config options live in `.dstack/`: `run-gpu-train.yml`, `run-gpu-search.yml`, `run-cpu-train.yml`, and `run-cpu-search.yml`.
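
These files follow dstack's task configuration format. An illustrative sketch of what such a task config generally looks like (this is an assumption about the general dstack shape, not the contents of the repo's actual files):

```yaml
# Hypothetical dstack task config; see the actual files in .dstack/
type: task
name: slm-lab-train
commands:
  - uv sync
  - slm-lab run spec.json spec_name train
resources:
  gpu: 24GB  # only for the GPU variants
```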
### Minimal Install (Orchestration Only)
For a lightweight machine that only dispatches dstack runs, syncs results, and generates plots (no local ML training):
```bash
uv sync --no-default-groups
uv run --no-default-groups slm-lab run-remote spec.json spec_name train
uv run --no-default-groups slm-lab pull spec_name
uv run --no-default-groups slm-lab plot -f folder1,folder2
```