tpo-alignment/Mistral-Instruct-7B-TPO-y3

Safetensors · Dataset: princeton-nlp/mistral-instruct-ultrafeedback · mistral · alignment-handbook · Generated from Trainer
arXiv: 2405.16681
License: mit
Commit History
Update README.md (7f0a64b, verified) · sahsaeedi committed on Feb 19, 2025
Upload tokenizer (b375482, verified) · sahsaeedi committed on Jan 23, 2025
Upload MistralForCausalLM (2a8914f, verified) · sahsaeedi committed on Jan 23, 2025
initial commit (f8f7337, verified) · sahsaeedi committed on Jan 23, 2025