---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- reinforcement-learning
- stable-baselines3
- a2c
- deep-rl
- panda-gym
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: 0.00 +/- 0.00 # replace with your evaluation results
      name: mean_reward
---

# A2C Agent playing PandaReachDense-v3

This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library and the [panda-gym](https://github.com/qgallouedec/panda-gym) environment.


## Usage (with huggingface_sb3)

To use this model, you need to install the following dependencies:

```bash
pip install stable-baselines3 huggingface_sb3 panda_gym shimmy
```

Then you can load and evaluate the model:

```python
import gymnasium as gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# Load the model and statistics
repo_id = "LuckLin/a2c-PandaReachDense-v3"
filename = "a2c-PandaReachDense-v3.zip"

checkpoint = load_from_hub(repo_id, filename)
model = A2C.load(checkpoint)

# Load the normalization statistics
stats_path = load_from_hub(repo_id, "vec_normalize.pkl")
env = DummyVecEnv([lambda: gym.make("PandaReachDense-v3", render_mode="human")])
env = VecNormalize.load(stats_path, env)

# At test time, we don't update the stats
env.training = False
env.norm_reward = False

# Evaluate
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, info = env.step(action)
    env.render()
```
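The `mean_reward` metric in the card header is the mean and standard deviation of per-episode returns from evaluation runs like the loop above. A minimal sketch of that computation, using made-up placeholder returns (stable-baselines3's `evaluate_policy` does the equivalent with `np.mean`/`np.std` over real episodes):

```python
import statistics

# Hypothetical per-episode returns collected during evaluation (placeholder values)
episode_returns = [-0.21, -0.18, -0.25, -0.19, -0.22]

# Mean and population standard deviation of the episode returns
mean_reward = statistics.mean(episode_returns)
std_reward = statistics.pstdev(episode_returns)

print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")  # mean_reward=-0.21 +/- 0.02
```

The resulting `mean_reward +/- std_reward` string is what goes into the `metrics` entry of the YAML frontmatter.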