# distilgpt2-dpo
This model is a version of mNLP-project/distilgpt2-finetuned fine-tuned with Direct Preference Optimization (DPO); the card does not name the preference dataset used. It achieves the following results on the evaluation set:
- Loss: 1.1792
- Rewards/chosen: 6.1190
- Rewards/rejected: 5.0796
- Rewards/accuracies: 0.6061
- Rewards/margins: 1.0394
- Logps/rejected: -703.7405
- Logps/chosen: -844.3468
- Logits/rejected: -11.5397
- Logits/chosen: -8.7315
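These metrics follow from the DPO objective. As a point of reference (these are the standard DPO definitions from Rafailov et al., 2023, not something this card states explicitly), the implicit reward is the β-scaled log-probability ratio between the policy and the frozen reference model, and the loss pushes the chosen completion's reward above the rejected one's:

```latex
% Standard DPO quantities; \beta is a training hyperparameter
% that this card does not report.
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}

% Rewards/margins = reward of the chosen completion y_w minus the
% reward of the rejected completion y_l, e.g. 6.1190 - 5.0796 = 1.0394.
\text{margin}(x) = r_\theta(x, y_w) - r_\theta(x, y_l)

% Rewards/accuracies is the fraction of evaluation pairs with a
% positive margin (here 0.6061, i.e. ~61% of pairs rank correctly).
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\bigl(r_\theta(x, y_w) - r_\theta(x, y_l)\bigr)
```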
## Model description
More information needed
## Intended uses & limitations
More information needed
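Pending that information, the checkpoint should load like any other causal language model on the Hub. A minimal usage sketch, assuming the repo id mNLP-project/distilgpt2-dpo and purely illustrative generation settings:

```python
# Minimal sketch: load the DPO-tuned checkpoint and sample a completion.
# The repo id and generation settings are assumptions, not documented usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mNLP-project/distilgpt2-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The key idea behind preference tuning is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```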
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto a trainer configuration follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
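The card does not say which trainer produced these numbers, but the metric names match trl's DPOTrainer. A hedged sketch of how the listed hyperparameters would map onto that API (a trl 0.8-era signature is assumed, matching the Transformers 4.40 timeframe; the preference dataset and beta value are placeholders the card does not provide):

```python
# Sketch only: wires the hyperparameters listed above into a trl
# DPOTrainer run. Dataset and beta are assumptions, not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mNLP-project/distilgpt2-finetuned"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)

train_dataset = ...  # preference pairs; the card does not name the dataset
eval_dataset = ...

args = TrainingArguments(
    output_dir="distilgpt2-dpo",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 16
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=args,
    beta=0.1,                        # assumed; not reported in the card
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```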
### Training results

Validation loss is lowest after the first epoch (1.1792), and the evaluation results reported at the top of this card correspond to that checkpoint. Over later epochs the training loss collapses toward zero while validation loss climbs steadily, indicating overfitting to the preference data.
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.2146 | 1.0 | 1337 | 1.1792 | 6.1190 | 5.0796 | 0.6061 | 1.0394 | -703.7405 | -844.3468 | -11.5397 | -8.7315 |
| 0.8026 | 2.0 | 2674 | 1.1980 | 6.3028 | 5.0877 | 0.6142 | 1.2151 | -703.6594 | -842.5087 | -9.1682 | -7.1950 |
| 0.3605 | 3.0 | 4011 | 1.3136 | 5.3889 | 4.2456 | 0.5960 | 1.1433 | -712.0801 | -851.6475 | -8.0251 | -5.8074 |
| 0.117 | 4.0 | 5348 | 1.4214 | 6.6526 | 5.0410 | 0.6134 | 1.6116 | -704.1267 | -839.0112 | -6.1296 | -4.2746 |
| 0.0663 | 5.0 | 6685 | 1.5485 | 5.0321 | 3.6157 | 0.5947 | 1.4164 | -718.3795 | -855.2162 | -2.6173 | -0.7400 |
| 0.0078 | 6.0 | 8022 | 1.7565 | 5.1090 | 3.1954 | 0.6059 | 1.9136 | -722.5821 | -854.4468 | -4.4487 | -2.6082 |
| 0.0095 | 7.0 | 9359 | 1.7638 | 4.7802 | 2.8888 | 0.6043 | 1.8913 | -725.6480 | -857.7352 | -3.9409 | -2.1229 |
| 0.0178 | 8.0 | 10696 | 1.9119 | 3.9489 | 1.9819 | 0.5990 | 1.9669 | -734.7172 | -866.0483 | -4.2940 | -2.5345 |
| 0.0089 | 9.0 | 12033 | 1.9710 | 3.7315 | 1.6704 | 0.5966 | 2.0611 | -737.8326 | -868.2217 | -5.5045 | -3.8933 |
| 0.0046 | 10.0 | 13370 | 2.0149 | 3.5136 | 1.4530 | 0.5940 | 2.0606 | -740.0063 | -870.4007 | -5.9962 | -4.4521 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
## Model tree for mNLP-project/distilgpt2-dpo

- Base model: distilbert/distilgpt2
- Fine-tuned from: mNLP-project/distilgpt2-finetuned