mastersubhajit/DPO
Pipeline: Text Generation · PEFT · Safetensors
Dataset: jondurbin/truthy-dpo-v0.1
Tags: dpo, alignment, truthfulness, lora, qwen2, conversational
License: apache-2.0
Files and versions
DPO · 20.2 MB · 1 contributor · History: 5 commits
Latest commit by mastersubhajit: README (96c3da9, verified, 5 days ago)
.gitattributes             1.52 kB    initial commit     5 days ago
README.md                  3.9 kB     README             5 days ago
adapter_config.json        679 Bytes  Upload model       5 days ago
adapter_model.safetensors  8.75 MB    Upload model       5 days ago
added_tokens.json          605 Bytes  Upload tokenizer   5 days ago
merges.txt                 1.67 MB    Upload tokenizer   5 days ago
special_tokens_map.json    525 Bytes  Upload tokenizer   5 days ago
tokenizer.json             7.03 MB    Upload tokenizer   5 days ago
tokenizer_config.json      7.31 kB    Upload tokenizer   5 days ago
vocab.json                 2.78 MB    Upload tokenizer   5 days ago