dfrees/llama-1b-instruct-dpo

Tags: PEFT, Safetensors, llama, 4-bit precision, bitsandbytes, arxiv:1910.09700
Repository: llama-1b-instruct-dpo (1.57 GB, 1 contributor, 6 commits)
Latest commit: 11e04b0 (verified) by dfrees, over 1 year ago: "Uploading merged DPO-trained model"
File                        Size       Last commit message                  Committed
.gitattributes              1.57 kB    Uploading DPO-trained model          over 1 year ago
README.md                   5.11 kB    Uploading merged DPO-trained model   over 1 year ago
adapter_config.json         656 Bytes  Uploading merged DPO-trained model   over 1 year ago
adapter_model.safetensors   3.42 MB    Uploading merged DPO-trained model   over 1 year ago
config.json                 1.41 kB    Uploading merged DPO-trained model   over 1 year ago
generation_config.json      184 Bytes  Uploading merged DPO-trained model   over 1 year ago
model.safetensors           1.56 GB    Uploading merged DPO-trained model   over 1 year ago
special_tokens_map.json     325 Bytes  Uploading merged DPO-trained model   over 1 year ago
tokenizer.json              9.09 MB    Uploading merged DPO-trained model   over 1 year ago
tokenizer_config.json       54.6 kB    Uploading merged DPO-trained model   over 1 year ago
training_args.bin           5.11 kB    Uploading merged DPO-trained model   over 1 year ago