PPO Agent playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1 using the stable-baselines3 library.
Trained on CartPole-v1 in Colab as part of the Hugging Face Deep RL Course. Box2D environments (for example, LunarLander) can be unreliable on Colab's current Python runtime, so CartPole is used as a clean, reproducible baseline.
Usage (with Stable-baselines3)
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download model files from the Hub
checkpoint_path = load_from_hub(
    repo_id="trewby/ppo-CartPole-v1",
    filename="ppo-CartPole-v1.zip",
)

# Load the model
model = PPO.load(checkpoint_path)

# Run one episode
env = gym.make("CartPole-v1", render_mode="human")  # use "rgb_array" if you do not want a window
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```
Evaluation results
- mean_reward on CartPole-v1: 500.00 +/- 0.00 (self-reported)