atorre/ppo-Huggy

Tags: Reinforcement Learning · ml-agents · TensorBoard · ONNX · unity-ml-agents · deep-reinforcement-learning · ML-Agents-Huggy
ppo-Huggy · 221 MB · 1 contributor · 2 commits
Latest commit: 70cf848, "First training of PPO agent on Huggy environment." (about 3 years ago)
  • Huggy (directory)
  • run_logs (directory)
  • .gitattributes — 1.48 kB (initial commit)
  • Huggy.onnx — 2.27 MB
  • README.md — 1 kB
  • config.json — 1.64 kB
  • configuration.yaml — 1.69 kB