Commit ce18ee3 by kengzwl (verified) · 1 parent: 23e2556

Upload README.md with huggingface_hub

Files changed (1): README.md (+89 −3)

# SLM Lab <br> ![GitHub tag (latest SemVer)](https://img.shields.io/github/tag/kengz/slm-lab) ![CI](https://github.com/kengz/SLM-Lab/workflows/CI/badge.svg)

<p align="center">
<i>Modular Deep Reinforcement Learning framework in PyTorch.</i>
<br><br>
<a href="docs/BENCHMARKS.md"><b>Benchmark Results</b></a> · <a href="https://slm-lab.gitbook.io/slm-lab/">Documentation</a> · <a href="CHANGELOG.md">Changelog</a>
<br><br>
</p>

> **NOTE:** v5.0 migrates to Gymnasium, adopts `uv` tooling, and modernizes dependencies with ARM support; see [CHANGELOG.md](CHANGELOG.md).
>
> Book readers: `git checkout v4.1.1` for the *Foundations of Deep Reinforcement Learning* code.

|||||
|:---:|:---:|:---:|:---:|
| ![ppo beamrider](https://user-images.githubusercontent.com/8209263/63994698-689ecf00-caaa-11e9-991f-0a5e9c2f5804.gif) | ![ppo breakout](https://user-images.githubusercontent.com/8209263/63994695-650b4800-caaa-11e9-9982-2462738caa45.gif) | ![ppo kungfumaster](https://user-images.githubusercontent.com/8209263/63994690-60469400-caaa-11e9-9093-b1cd38cee5ae.gif) | ![ppo mspacman](https://user-images.githubusercontent.com/8209263/63994685-5cb30d00-caaa-11e9-8f35-78e29a7d60f5.gif) |
| BeamRider | Breakout | KungFuMaster | MsPacman |
| ![ppo pong](https://user-images.githubusercontent.com/8209263/63994680-59b81c80-caaa-11e9-9253-ed98370351cd.gif) | ![ppo qbert](https://user-images.githubusercontent.com/8209263/63994672-54f36880-caaa-11e9-9757-7780725b53af.gif) | ![ppo seaquest](https://user-images.githubusercontent.com/8209263/63994665-4dcc5a80-caaa-11e9-80bf-c21db818115b.gif) | ![ppo spaceinvaders](https://user-images.githubusercontent.com/8209263/63994624-15c51780-caaa-11e9-9c9a-854d3ce9066d.gif) |
| Pong | Qbert | Seaquest | Sp.Invaders |
| ![sac ant](https://user-images.githubusercontent.com/8209263/63994867-ff6b8b80-caaa-11e9-971e-2fac1cddcbac.gif) | ![sac halfcheetah](https://user-images.githubusercontent.com/8209263/63994869-01354f00-caab-11e9-8e11-3893d2c2419d.gif) | ![sac hopper](https://user-images.githubusercontent.com/8209263/63994871-0397a900-caab-11e9-9566-4ca23c54b2d4.gif) | ![sac humanoid](https://user-images.githubusercontent.com/8209263/63994883-0befe400-caab-11e9-9bcc-c30c885aad73.gif) |
| Ant | HalfCheetah | Hopper | Humanoid |
| ![sac doublependulum](https://user-images.githubusercontent.com/8209263/63994879-07c3c680-caab-11e9-974c-06cdd25bfd68.gif) | ![sac pendulum](https://user-images.githubusercontent.com/8209263/63994880-085c5d00-caab-11e9-850d-049401540e3b.gif) | ![sac reacher](https://user-images.githubusercontent.com/8209263/63994881-098d8a00-caab-11e9-8e19-a3b32d601b10.gif) | ![sac walker](https://user-images.githubusercontent.com/8209263/63994882-0abeb700-caab-11e9-9e19-b59dc5c43393.gif) |
| Inv.DoublePendulum | InvertedPendulum | Reacher | Walker |

## Quick Start

```bash
# Install
uv sync
uv tool install --editable .

# Run the demo
slm-lab run          # PPO on CartPole
slm-lab run --render # with visualization

# Run a custom experiment
slm-lab run spec.json spec_name train        # local training
slm-lab run-remote spec.json spec_name train # cloud training (dstack)

# Help (CLI uses Typer)
slm-lab --help     # list all commands
slm-lab run --help # options for the run command

# Troubleshooting: if `slm-lab` is not found, prefix with `uv run`
uv run slm-lab run
```
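
Custom runs take a spec file (the `spec.json` placeholder above). As a hypothetical sketch only — the top-level key layout (`agent`/`env`/`meta`) follows SLM Lab v4 spec conventions and may differ in v5, and the `ppo_cartpole` name, hyperparameter values, and file name are invented for illustration — a minimal spec might look like:

```shell
# Write a hypothetical minimal spec file; key names follow SLM Lab v4
# conventions and should be checked against the specs bundled with the repo.
cat > my_spec.json <<'EOF'
{
  "ppo_cartpole": {
    "agent": [{
      "name": "PPO",
      "algorithm": {"name": "PPO", "gamma": 0.99}
    }],
    "env": [{"name": "CartPole-v1", "max_frame": 100000}],
    "meta": {"max_session": 1, "max_trial": 1}
  }
}
EOF
# Sanity-check that the file is valid JSON
python3 -c "import json; json.load(open('my_spec.json'))" && echo "spec OK"
```

If the sketch matches the real format, `slm-lab run my_spec.json ppo_cartpole train` would launch it.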

## Features

- **Algorithms**: DQN, DDQN+PER, A2C, PPO, SAC, and variants
- **Environments**: Gymnasium (Atari, MuJoCo, Box2D)
- **Networks**: MLP, ConvNet, RNN with flexible architectures
- **Hyperparameter Search**: ASHA scheduler with Ray Tune
- **Cloud Training**: dstack integration with automatic HuggingFace sync

## Cloud Training (dstack)

Run experiments on cloud GPUs with automatic result sync to HuggingFace.

```bash
# Setup
cp .env.example .env   # add HF_TOKEN
uv tool install dstack # install the dstack CLI
# Configure the dstack server; see https://dstack.ai/docs/quickstart

# Run on cloud
slm-lab run-remote spec.json spec_name train       # CPU training (default)
slm-lab run-remote spec.json spec_name search      # CPU ASHA search (default)
slm-lab run-remote --gpu spec.json spec_name train # GPU training (for image envs)

# Sync results
slm-lab pull spec_name # download results from HuggingFace
slm-lab list           # list available experiments
```

Config options in `.dstack/`: `run-gpu-train.yml`, `run-gpu-search.yml`, `run-cpu-train.yml`, `run-cpu-search.yml`
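
These files are dstack run configurations. As a sketch of the general shape only — it follows dstack's documented task-config format, not the actual contents of this repo's `.dstack/` files, and the name, commands, and resource values are invented for illustration:

```shell
# Write a hypothetical dstack task config; the real run-cpu-train.yml in
# this repo may differ in commands and resources.
cat > example-task.yml <<'EOF'
type: task
name: slm-lab-train
commands:
  - uv sync
  - uv run slm-lab run spec.json spec_name train
resources:
  cpu: 8
  memory: 16GB
EOF
# Confirm the file was written with the expected top-level keys
grep -q "type: task" example-task.yml && echo "config written"
```

A dstack server then picks such a file up through its normal apply/run workflow; see the quickstart linked above.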

### Minimal Install (Orchestration Only)

For a lightweight box that only dispatches dstack runs, syncs results, and generates plots (no local ML training):

```bash
uv sync --no-default-groups
uv run --no-default-groups slm-lab run-remote spec.json spec_name train
uv run --no-default-groups slm-lab pull spec_name
uv run --no-default-groups slm-lab plot -f folder1,folder2
```