statking/zephyr-7b-dpo-full

Pipeline: Text Generation (conversational, text-generation-inference)
Libraries: Transformers, Safetensors
Dataset: HuggingFaceH4/ultrafeedback_binarized
Tags: mistral, alignment-handbook, trl, dpo, Generated from Trainer
Files and versions (branch: main)

Repository size: 14.5 GB, 1 contributor, 7 commits
Latest commit: "End of training" by statking (9ccfc21, verified), almost 2 years ago
File                               Size       Last commit message              Committed
.gitattributes                     1.52 kB    initial commit                   almost 2 years ago
README.md                          2.83 kB    End of training                  almost 2 years ago
all_results.json                   762 Bytes  End of training                  almost 2 years ago
config.json                        640 Bytes  End of training                  almost 2 years ago
eval_results.json                  564 Bytes  End of training                  almost 2 years ago
generation_config.json             111 Bytes  Model save                       almost 2 years ago
model-00001-of-00003.safetensors   4.94 GB    Model save                       almost 2 years ago
model-00002-of-00003.safetensors   5 GB       Model save                       almost 2 years ago
model-00003-of-00003.safetensors   4.54 GB    Model save                       almost 2 years ago
model.safetensors.index.json       24 kB      Model save                       almost 2 years ago
special_tokens_map.json            551 Bytes  Training in progress, step 100   almost 2 years ago
tokenizer.json                     1.8 MB     Training in progress, step 100   almost 2 years ago
tokenizer.model                    493 kB     Training in progress, step 100   almost 2 years ago
tokenizer_config.json              1.39 kB    Training in progress, step 100   almost 2 years ago
train_results.json                 218 Bytes  Model save                       almost 2 years ago
trainer_state.json                 28.5 kB    Model save                       almost 2 years ago
training_args.bin                  6.26 kB    Training in progress, step 100   almost 2 years ago
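The three model-0000X-of-00003.safetensors files above are shards of a single checkpoint, and model.safetensors.index.json maps each tensor name to the shard that stores it. A minimal sketch of parsing this standard shard-naming convention (the helper function here is hypothetical, not part of the repository):

```python
import re

# Sharded safetensors checkpoints are named model-XXXXX-of-YYYYY.safetensors,
# where XXXXX is the 1-based shard index and YYYYY is the total shard count.
SHARD_RE = re.compile(r"^model-(\d{5})-of-(\d{5})\.safetensors$")

def parse_shard_name(filename):
    """Return (shard_index, total_shards) for a sharded safetensors
    filename, or None if the name does not follow the convention."""
    m = SHARD_RE.match(filename)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

# Applied to the files listed in this repository:
print(parse_shard_name("model-00001-of-00003.safetensors"))  # (1, 3)
print(parse_shard_name("model.safetensors.index.json"))      # None
```

Loaders such as `transformers.AutoModelForCausalLM.from_pretrained` resolve the index file first and then fetch each shard, so all three shard files plus the index must be present for the checkpoint to load.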