---
library_name: stable-baselines3
tags:
- FetchPickAndPlaceDense-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchPickAndPlaceDense-v4
type: FetchPickAndPlaceDense-v4
metrics:
- type: mean_reward
value: -11.01 +/- 5.26
name: mean_reward
verified: false
license: mit
language:
- en
---
# **DDPG** Agent playing **FetchPickAndPlaceDense-v4**
- [Github Repository](https://github.com/kuds/rl-fetch)
- [Google Colab Notebook](https://colab.research.google.com/github/kuds/rl-fetch/blob/main/Fetch/Pick%20and%20Place/%5BFetch%20Pick%20%26%20Place%5D%20Deep%20Deterministic%20Policy%20Gradient%20(DDPG).ipynb)
- [Finding Theta - Blog Post](https://www.findingtheta.com/blog/mastering-robotic-manipulation-with-reinforcement-learning-tqc-and-ddpg-for-fetch-environments)
You can load the trained model with the following Python code (requires `stable-baselines3`, `gymnasium-robotics`, and MuJoCo):
```python
import gymnasium as gym
import gymnasium_robotics
from stable_baselines3 import DDPG
from stable_baselines3.common.env_util import make_vec_env

gym.register_envs(gymnasium_robotics)

# Load the trained model
model = DDPG.load("best-model.zip")

# Create the environment
env = make_vec_env("FetchPickAndPlaceDense-v4", n_envs=1)

# Reset the environment (VecEnv.reset returns only the observations)
obs = env.reset()

# Enjoy the trained agent; VecEnvs reset automatically when an episode ends
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
    env.render()

env.close()
```
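For context on the reported score: in the dense-reward Fetch tasks, the per-step reward is the negative Euclidean distance between the achieved and desired goal positions, so episode returns are negative and closer to zero is better. A minimal sketch of that reward function (the name `dense_reward` is illustrative, not part of the API):

```python
import numpy as np

def dense_reward(achieved_goal: np.ndarray, desired_goal: np.ndarray) -> np.ndarray:
    """Dense Fetch-style reward: negative Euclidean distance to the goal."""
    return -np.linalg.norm(achieved_goal - desired_goal, axis=-1)

# 3-4-5 triangle: the object is 5 units from the target
print(dense_reward(np.zeros(3), np.array([3.0, 4.0, 0.0])))  # -5.0
```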
### Hugging Face Hub
You can also use the Hugging Face Hub to load the model. First, you need to install the Hugging Face Hub library:
```bash
pip install huggingface_hub
```
Then, you can download and load the model from the Hub using the following code:
```python
import gymnasium as gym
import gymnasium_robotics
from huggingface_hub import hf_hub_download
from stable_baselines3 import DDPG
from stable_baselines3.common.env_util import make_vec_env

gym.register_envs(gymnasium_robotics)

# Download the model from the Hub
model_path = hf_hub_download(repo_id="kuds/fetch-pick-place-dense-ddpg", filename="best-model.zip")

# Load the model
model = DDPG.load(model_path)

# Create the environment
env = make_vec_env("FetchPickAndPlaceDense-v4", n_envs=1)

# Enjoy the trained agent; VecEnvs reset automatically when an episode ends
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
    env.render()
env.close()
```