---
library_name: stable-baselines3
tags:
  - PandaReachDense-v3
  - reinforcement-learning
  - stable-baselines3
  - a2c
  - deep-rl
  - panda-gym
model-index:
  - name: A2C
    results:
      - task:
          type: reinforcement-learning
          name: reinforcement-learning
        dataset:
          name: PandaReachDense-v3
          type: PandaReachDense-v3
        metrics:
          - type: mean_reward
            value: 0.00 +/- 0.00
            name: mean_reward
---

# A2C Agent playing PandaReachDense-v3

This is a trained model of an A2C agent playing PandaReachDense-v3 using the stable-baselines3 library and the panda-gym environment.

## Usage (with huggingface_sb3)

To use this model, you need to install the following dependencies:

```bash
pip install stable-baselines3 huggingface_sb3 panda_gym shimmy
```

Then you can load and evaluate the model:

```python
import gymnasium as gym
import panda_gym  # noqa: F401  (importing registers the Panda environments)

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# Download the trained checkpoint from the Hub
repo_id = "LuckLin/a2c-PandaReachDense-v3"
filename = "a2c-PandaReachDense-v3.zip"

checkpoint = load_from_hub(repo_id, filename)
model = A2C.load(checkpoint)

# Load the saved VecNormalize statistics and wrap the environment.
# render_mode must be set at creation time with Gymnasium.
stats_path = load_from_hub(repo_id, "vec_normalize.pkl")
env = DummyVecEnv([lambda: gym.make("PandaReachDense-v3", render_mode="human")])
env = VecNormalize.load(stats_path, env)

# At test time, don't update the running stats and don't normalize rewards
env.training = False
env.norm_reward = False

# Run the agent
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, info = env.step(action)
    env.render()
```