MJ-Bench/DDPO-alignment-gpt-4o
Maintained by MJ-Bench-Team

Tags: Text-to-Image, stable-diffusion, stable-diffusion-diffusers, DDPO
Paper: arXiv:2407.04842
Repository files (9.84 MB total, 2 contributors, 12 commits; latest commit bdc898c by yichaodu, "Upload README.md with huggingface_hub", over 1 year ago):

File                                Size       Last commit message
.gitattributes                      1.52 kB    initial commit
README.md                           1.58 kB    Upload README.md with huggingface_hub
optimizer.bin                       6.59 MB    Upload optimizer.bin with huggingface_hub
pytorch_lora_weights.safetensors    3.23 MB    Upload pytorch_lora_weights.safetensors with huggingface_hub
random_states_0.pkl                 14.3 kB    Upload random_states_0.pkl with huggingface_hub
scaler.pt                           988 B      Upload scaler.pt with huggingface_hub

All files were last updated over 1 year ago.