DUAL-GPO/phi-2-gpo-iter-1
Maintained by DUAL Group

Tags: PEFT · Safetensors · HuggingFaceH4/ultrafeedback_binarized · phi · alignment-handbook · Generated from Trainer · trl · dpo · custom_code
License: MIT
Files and versions (branch `main`): 45.4 MB total, 1 contributor, 4 commits.
Latest commit: dd5eeed (verified) by lole25, "End of training", almost 2 years ago.
| File | Size | Last commit message |
| --- | --- | --- |
| .gitattributes | 1.52 kB | initial commit |
| README.md | 2.22 kB | End of training |
| adapter_config.json | 595 Bytes | Training in progress, step 100 |
| adapter_model.safetensors | 42 MB | Model save |
| added_tokens.json | 1.08 kB | Training in progress, step 100 |
| all_results.json | 751 Bytes | Model save |
| config.json | 898 Bytes | End of training |
| eval_results.json | 575 Bytes | Model save |
| merges.txt | 456 kB | Training in progress, step 100 |
| special_tokens_map.json | 587 Bytes | Training in progress, step 100 |
| tokenizer.json | 2.11 MB | Training in progress, step 100 |
| tokenizer_config.json | 7.82 kB | Training in progress, step 100 |
| train_results.json | 197 Bytes | Model save |
| trainer_state.json | 7.55 kB | Model save |
| training_args.bin | 5.82 kB | Training in progress, step 100 |
| vocab.json | 798 kB | Training in progress, step 100 |

All files were last updated almost 2 years ago.