---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 249.17 +/- 16.55
      name: mean_reward
      verified: false
---

# **ppo** Agent playing **LunarLander-v2**

This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
# In Colab, install packages if required:
# gymnasium[box2d]: Contains the LunarLander-v2 environment 🌛
# stable-baselines3[extra]: The deep reinforcement learning library.
# huggingface_sb3: Additional code for Stable-baselines3 to load and upload models from the Hugging Face 🤗 Hub.
!apt install swig cmake
!pip install "gymnasium[box2d]" "stable_baselines3[extra]" huggingface-sb3

import gymnasium as gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Create the evaluation environment.
# Monitor records episode statistics for evaluate_policy (it does not record video);
# render_mode="human" opens a window so the demo loop below is visible.
env_id = "LunarLander-v2"
eval_env = Monitor(gym.make(env_id, render_mode="human"))

# Load the saved agent from the Hub
repo_id = "davidkh/ppo-LunarLander-v2"
filename = "ppo-LunarLander-v2.zip"
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, print_system_info=True)

# Evaluate the agent over 10 deterministic episodes
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")

# Example use of the trained agent
observation, info = eval_env.reset()
for _ in range(1000):
    # With render_mode="human", the environment renders itself at each step
    action, _states = model.predict(observation, deterministic=True)
    observation, reward, terminated, truncated, info = eval_env.step(action)
    if terminated or truncated:
        print("Environment is reset")
        observation, info = eval_env.reset()

eval_env.close()

...
```
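
Note that the `Monitor` wrapper above only logs episode statistics; it does not produce a video file. If you want an actual recording of the agent, one option (an addition here, not something this card originally covered) is gymnasium's `RecordVideo` wrapper, which requires `moviepy`; the `videos` output folder below is an arbitrary choice:

```python
import gymnasium as gym
from gymnasium.wrappers import RecordVideo

# RecordVideo needs raw frames, so the env must be created with render_mode="rgb_array".
# One .mp4 per recorded episode is written to the (arbitrary) "videos" folder.
video_env = RecordVideo(gym.make("LunarLander-v2", render_mode="rgb_array"),
                        video_folder="videos")

observation, info = video_env.reset()
done = False
while not done:
    action, _states = model.predict(observation, deterministic=True)
    observation, reward, terminated, truncated, info = video_env.step(action)
    done = terminated or truncated

video_env.close()  # closing the env flushes the video file to disk
```

By default, `RecordVideo` records episodes on a capped cubic schedule (0, 1, 8, 27, ...), so the first episode is always captured.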
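The install comment above also mentions uploading: `huggingface_sb3` provides `package_to_hub`, which evaluates the agent, records a replay video, and pushes everything to a Hub repo. A minimal sketch, assuming you are logged in (e.g. via `notebook_login()` or `huggingface-cli login`) and using a placeholder `repo_id` of your own:

```python
import gymnasium as gym
from huggingface_sb3 import package_to_hub
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv

env_id = "LunarLander-v2"

# package_to_hub records a replay video, so it needs an env that returns frames
eval_env = DummyVecEnv([lambda: Monitor(gym.make(env_id, render_mode="rgb_array"))])

package_to_hub(
    model=model,                        # the trained agent loaded above
    model_name="ppo-LunarLander-v2",
    model_architecture="PPO",
    env_id=env_id,
    eval_env=eval_env,
    repo_id="your-username/ppo-LunarLander-v2",  # placeholder: replace with your repo
    commit_message="Upload PPO LunarLander-v2 agent",
)
```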
|