How to use Felix555/a2c-PandaReachDense-v2 with stable-baselines3:

```python
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="Felix555/a2c-PandaReachDense-v2",
    filename="{MODEL FILENAME}.zip",
)
```
A2C Agent playing PandaReachDense-v2
This is a trained model of an A2C agent playing PandaReachDense-v2 using the stable-baselines3 library. The environment comes from panda-gym, a set of Reinforcement Learning (RL) environments for the Franka Emika Panda robot integrated with OpenAI Gym. Five tasks are included: reach, push, slide, pick & place, and stack. They all follow a Multi-Goal RL framework, allowing the use of goal-oriented RL algorithms. To foster open research, the authors chose the open-source physics engine PyBullet, and the package makes it easy to define new tasks or new robots. The panda-gym paper also presents baseline results obtained with state-of-the-art model-free off-policy algorithms. panda-gym is open-source and freely available at https://github.com/qgallouedec/panda-gym.
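In the Multi-Goal framework mentioned above, each observation carries an `achieved_goal` and a `desired_goal`, and the "Dense" reach variant rewards the agent continuously rather than only on success. A minimal sketch, assuming the dense reward is the negative Euclidean distance between the two goals (the function name here is illustrative, not panda-gym's API):

```python
import numpy as np

def dense_reach_reward(achieved_goal: np.ndarray, desired_goal: np.ndarray) -> float:
    """Dense reward: negative Euclidean distance between the end-effector
    position (achieved goal) and the target position (desired goal)."""
    return -float(np.linalg.norm(achieved_goal - desired_goal))

# The closer the gripper gets to the target, the closer the reward is to 0.
far = dense_reach_reward(np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.4, 0.0]))   # -0.5
near = dense_reach_reward(np.array([0.29, 0.39, 0.0]), np.array([0.3, 0.4, 0.0]))
```

This shaping is what lets on-policy algorithms like A2C learn the reach task without sparse-reward machinery such as hindsight relabeling.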
Usage (with Stable-baselines3)

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it as an A2C policy.
checkpoint = load_from_hub(
    repo_id="Felix555/a2c-PandaReachDense-v2",
    filename="{MODEL FILENAME}.zip",
)
model = A2C.load(checkpoint)
```
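To roll out the downloaded policy, the environment has to be registered by importing `panda_gym`. A sketch under the old gym step API that the v2 environments used (`{MODEL FILENAME}` stays a placeholder for the checkpoint file listed in the repo):

```python
import gym
import panda_gym  # noqa: F401 -- registers the Panda* environments with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="Felix555/a2c-PandaReachDense-v2",
    filename="{MODEL FILENAME}.zip",  # replace with the .zip name from the repo's file list
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
done = False
episode_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_reward += float(reward)
print(f"episode reward: {episode_reward:.2f}")
```

With the dense reward, the episode return is a negative number close to 0 when the policy reaches the target quickly, which is consistent with the reported mean reward of about -1.17.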
Evaluation results
- mean_reward on PandaReachDense-v2 (self-reported): -1.17 +/- 0.32