---
license: other
tags:
- dpo
- moe
---
This is a DPO fine-tuned Mixture-of-Experts (MoE) model with about 19B parameters.
| ``` | |
| DPO Trainer | |
| TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023. | |
| ``` | |