
SLM Lab

Modular Deep Reinforcement Learning framework in PyTorch.
Documentation · Benchmark Results

NOTE: v5.0 migrates to Gymnasium and uv tooling, and updates dependencies for modern platforms including ARM; see CHANGELOG.md.

Book readers: run git checkout v4.1.1 for the Foundations of Deep Reinforcement Learning code.

Benchmark demo GIFs: PPO on Atari (BeamRider, Breakout, KungFuMaster, MsPacman, Pong, Qbert, Seaquest, SpaceInvaders); SAC on MuJoCo (Ant, HalfCheetah, Hopper, Humanoid, InvertedDoublePendulum, InvertedPendulum, Reacher, Walker).

Quick Start

# Install
uv sync
uv tool install --editable .

# Run demo (PPO CartPole)
slm-lab run                                    # PPO CartPole
slm-lab run --render                           # with visualization

# Run custom experiment
slm-lab run spec.json spec_name train          # local training
slm-lab run-remote spec.json spec_name train   # cloud training (dstack)

# Help (CLI uses Typer)
slm-lab --help                                 # list all commands
slm-lab run --help                             # options for run command

# Troubleshoot: if slm-lab is not found on PATH, prefix commands with uv run
uv run slm-lab run
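
Experiments are described by a JSON spec file keyed by spec_name, as the run commands above suggest. The sketch below is illustrative only: the key names are assumptions, not the verified SLM Lab schema, so consult the Documentation link for the real spec format.

```json
{
  "my_ppo_cartpole": {
    "agent": [{ "name": "PPO" }],
    "env": [{ "name": "CartPole-v1" }],
    "meta": { "max_session": 1 }
  }
}
```

With a file like this saved as spec.json, the spec_name argument to slm-lab run would be my_ppo_cartpole.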

Features

  • Algorithms: DQN, DDQN+PER, A2C, PPO, SAC, and variants
  • Environments: Gymnasium (Atari, MuJoCo, Box2D)
  • Networks: MLP, ConvNet, RNN with flexible architectures
  • Hyperparameter Search: ASHA scheduler with Ray Tune
  • Cloud Training: dstack integration with auto HuggingFace sync

Cloud Training (dstack)

Run experiments on cloud GPUs with automatic result sync to HuggingFace.

# Setup
cp .env.example .env  # Add HF_TOKEN
uv tool install dstack  # Install dstack CLI
# Configure dstack server - see https://dstack.ai/docs/quickstart

# Run on cloud
slm-lab run-remote spec.json spec_name train           # CPU training (default)
slm-lab run-remote spec.json spec_name search          # CPU ASHA search (default)
slm-lab run-remote --gpu spec.json spec_name train     # GPU training (for image envs)

# Sync results
slm-lab pull spec_name    # Download from HuggingFace
slm-lab list              # List available experiments

Configuration files live in .dstack/: run-gpu-train.yml, run-gpu-search.yml, run-cpu-train.yml, run-cpu-search.yml

Minimal Install (Orchestration Only)

For a lightweight host that only dispatches dstack runs, syncs results, and generates plots (no local ML training):

uv sync --no-default-groups
uv run --no-default-groups slm-lab run-remote spec.json spec_name train
uv run --no-default-groups slm-lab pull spec_name
uv run --no-default-groups slm-lab plot -f folder1,folder2
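
The three orchestration steps above can be bundled into a small helper script. This is a hypothetical convenience wrapper, not part of SLM Lab; it only reuses the commands shown in this README, with SPEC and NAME as placeholders for your own spec file and spec name.

```shell
# Write a reusable orchestration script: dispatch a remote run, pull
# results from HuggingFace, then plot. Quoted 'EOF' keeps $-variables
# from expanding at write time.
cat > orchestrate.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
SPEC="${1:-spec.json}"   # path to spec file (placeholder default)
NAME="${2:-spec_name}"   # spec name inside the file (placeholder default)
uv run --no-default-groups slm-lab run-remote "$SPEC" "$NAME" train
uv run --no-default-groups slm-lab pull "$NAME"
uv run --no-default-groups slm-lab plot -f "$NAME"
EOF
chmod +x orchestrate.sh
```

Run it as ./orchestrate.sh my_spec.json my_spec_name once the minimal install above is in place.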