How to use vadhri/deeprl_ppo_lunarlander with stable-baselines3:

```python
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="vadhri/deeprl_ppo_lunarlander",
    filename="{MODEL FILENAME}.zip",
)
```
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library.
TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```