GRATH: Gradual Self-Truthifying for Large Language Models
Paper: arXiv:2401.12292
This is a gradually self-truthified model (9 iterations) from the paper GRATH: Gradual Self-Truthifying for Large Language Models.
Note: DPO was applied to this model ten times, with the reference model fixed to the pretrained base model at every iteration to avoid overfitting.
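As a rough illustration of the setup above, the sketch below computes the per-pair DPO loss, where the reference log-probabilities come from the fixed pretrained base model rather than the previous iteration's policy. The function name, argument names, and the β value are illustrative, not taken from the paper's code.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)).

    The ref_* log-probabilities are from the pretrained base model and stay
    fixed across all iterations, matching the note above.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # numerically plain logistic; fine for scalar illustration
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy assigns no higher relative probability to the chosen answer than to the rejected one, the loss is at least `-log(0.5) ≈ 0.693`; widening the chosen-vs-rejected margin drives it toward zero, which is what repeated DPO rounds aim to do on the self-generated truthfulness pairs.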
The following bitsandbytes quantization config was used during training: