imerino/PPO-Pixelcopter-PLE-v0
Tags: Reinforcement Learning · Pixelcopter-PLE-v0 · reinforce · custom-implementation · deep-rl-class · Eval Results
Repository: 162 kB · 1 contributor · 4 commits on `main`
Latest commit: d805c56 — "Upload folder using huggingface_hub" by imerino, about 2 years ago
| File | Size | Last commit | When |
|---|---|---|---|
| .gitattributes | 1.52 kB | initial commit | about 2 years ago |
| README.md | 734 Bytes | Upload folder using huggingface_hub | about 2 years ago |
| hyperparameters.json | 118 Bytes | Upload folder using huggingface_hub | about 2 years ago |
| model.pt | 144 kB | Upload folder using huggingface_hub | about 2 years ago |
| replay.mp4 | 15.1 kB | Upload folder using huggingface_hub | about 2 years ago |
| results.json | 129 Bytes | Upload folder using huggingface_hub | about 2 years ago |