---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v2
      type: PandaReachDense-v2
    metrics:
    - type: mean_reward
      value: -1.17 +/- 0.32
      name: mean_reward
      verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**

This is a trained model of an **A2C** agent playing **PandaReachDense-v2**, trained with the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

The environment comes from panda-gym, a set of Reinforcement Learning (RL) environments for the Franka Emika Panda robot integrated with OpenAI Gym. Five tasks are included: reach, push, slide, pick & place, and stack. They all follow a Multi-Goal RL framework, allowing the use of goal-oriented RL algorithms. To foster open research, the authors chose the open-source physics engine PyBullet, and the implementation makes it easy to define new tasks or new robots. The accompanying paper also presents baseline results obtained with state-of-the-art model-free off-policy algorithms. panda-gym is open source and freely available at https://github.com/qgallouedec/panda-gym.
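As a quick illustration of the goal-oriented interface, the sketch below creates the environment used here. This assumes panda-gym v2 is installed (e.g. `pip install panda-gym`); importing the package is what registers the environments with Gym:

```python
import gym
import panda_gym  # noqa: F401 -- importing panda_gym registers the Panda envs with Gym

# Goal-oriented environment: observations are dicts with
# 'observation', 'achieved_goal', and 'desired_goal' keys.
env = gym.make("PandaReachDense-v2")
print(env.observation_space)
env.close()
```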
## Usage (with Stable-baselines3)

Download the checkpoint from the Hub and load it with `A2C.load`. A minimal loading sketch follows; the `repo_id` and `filename` below are placeholders, so substitute the actual repository coordinates of this model:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# repo_id and filename are placeholders -- point them at this model's Hub repo.
checkpoint = load_from_hub(
    repo_id="<user>/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```
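
To reproduce a score like the reported mean_reward, a short evaluation sketch using stable-baselines3's standard `evaluate_policy` helper (again assuming panda-gym is installed):

```python
import gym
import panda_gym  # noqa: F401 -- registers PandaReachDense-v2
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```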