migaku4649/dpo_lora_model2
Pipeline: Text Generation · Library: Transformers · Format: Safetensors
Dataset: u-10bei/dpo-dataset-qwen-cot · Language: English
Tags: dpo, unsloth, qwen, alignment, conversational
License: apache-2.0
Repository size: 153 MB
1 contributor · History: 2 commits
Latest commit: migaku4649 — "Upload full DPO merged model" (f1b0a19, verified, about 1 month ago)
All files below were added in the commit "Upload full DPO merged model", about 1 month ago:

.gitattributes                1.57 kB
README.md                     1.89 kB
adapter_config.json           885 Bytes
adapter_model.safetensors     137 MB
added_tokens.json             707 Bytes
chat_template.jinja           2.51 kB
merges.txt                    1.67 MB
special_tokens_map.json       614 Bytes
tokenizer.json                11.4 MB
tokenizer_config.json         5.43 kB
training_args.bin             6.8 kB
vocab.json                    2.78 MB