Reinforcement Learning
ml-agents
TensorBoard
ONNX
unity-ml-agents
deep-reinforcement-learning
ML-Agents-Huggy
Instructions to use bitcloud2/ppo-Huggy with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- ml-agents
How to use bitcloud2/ppo-Huggy with ml-agents:
```
mlagents-load-from-hf --repo-id="bitcloud2/ppo-Huggy" --local-dir="./downloads"
```
- Notebooks
- Google Colab
- Kaggle
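Once the model is downloaded, a typical next step with ml-agents is to resume training from the fetched checkpoint. A minimal sketch, assuming ML-Agents is installed locally; the config path and run id below are placeholders for your own setup, not files shipped in this repo:

```shell
# Download the trained Huggy policy from the Hub into ./downloads
mlagents-load-from-hf --repo-id="bitcloud2/ppo-Huggy" --local-dir="./downloads"

# Resume training from the downloaded checkpoint.
# NOTE: the config path and run id are placeholders — point them
# at your own ML-Agents trainer configuration and run directory.
mlagents-learn ./config/ppo/Huggy.yaml --run-id="Huggy" --resume
```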
### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/ThomasSimonini/Huggy
2. Write your model_id: bitcloud2/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀