# Solarized-18B-truthy

This model is Solarized-18B-dpo, fine-tuned to improve truthfulness.

It is a frankenmerge created with mergekit by alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct. We then applied DPO over a high-quality preference dataset.
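The layer-alternating merge described above corresponds to mergekit's `passthrough` method. A hypothetical config sketch is shown below; the two source model ids match the models named in this card, but the layer ranges are illustrative assumptions, as the actual ranges used for Solarized-18B are not given here.

```yaml
# Hypothetical mergekit passthrough ("frankenmerge") config.
# Layer ranges below are illustrative only, not the ones used for this model.
slices:
  - sources:
      - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
        layer_range: [0, 24]
  - sources:
      - model: upstage/SOLAR-10.7B-Instruct-v1.0
        layer_range: [12, 36]
  - sources:
      - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
        layer_range: [24, 48]
merge_method: passthrough
dtype: float16
```

With `passthrough`, mergekit simply stacks the selected layer slices in order, which is how two ~10.7B models can yield a larger ~18B composite.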


Model size: 18B params · Format: Safetensors · Tensor types: F32, F16, I8
