do test run on scitas with ref_model
Files changed:
- README.md +27 -27
- model.safetensors +1 -1

README.md CHANGED

@@ -6,26 +6,26 @@ tags:
 - dpo
 - generated_from_trainer
 model-index:
 - name: gpt2-dpo
   results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

 # gpt2-dpo

 This model is a fine-tuned version of [mNLP-project/gpt2-finetuned](https://huggingface.co/mNLP-project/gpt2-finetuned) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss:
-- Rewards/chosen:
-- Rewards/rejected:
-- Rewards/accuracies: 0.
-- Rewards/margins: 0.
-- Logps/rejected: -
-- Logps/chosen: -
-- Logits/rejected: -
-- Logits/chosen: -
+- Loss: 0.6350
+- Rewards/chosen: 1.6222
+- Rewards/rejected: 1.3204
+- Rewards/accuracies: 0.6496
+- Rewards/margins: 0.3018
+- Logps/rejected: -780.0735
+- Logps/chosen: -933.2262
+- Logits/rejected: -34.5449
+- Logits/chosen: -28.7838

 ## Model description

@@ -44,31 +44,31 @@ More information needed

 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate: 1e-
+- learning_rate: 1e-06
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps:
-- total_train_batch_size:
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_ratio: 0.
+- lr_scheduler_warmup_ratio: 0.2
 - num_epochs: 10

 ### Training results

-| Training Loss | Epoch
-|:-------------:|:-----:|:----
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+| 0.6286        | 0.9993 | 668  | 0.6350          | 1.6222         | 1.3204           | 0.6496             | 0.3018          | -780.0735      | -933.2262    | -34.5449        | -28.7838      |
+| 0.6387        | 2.0    | 1337 | 0.6662          | 1.8546         | 1.5416           | 0.6302             | 0.3130          | -777.8622      | -930.9024    | -34.5110        | -28.7424      |
+| 0.5643        | 2.9993 | 2005 | 0.6635          | 2.0534         | 1.6918           | 0.6396             | 0.3616          | -776.3599      | -928.9147    | -34.5066        | -28.7168      |
+| 0.4487        | 4.0    | 2674 | 0.6677          | 2.2748         | 1.8809           | 0.6451             | 0.3940          | -774.4694      | -926.7002    | -34.1409        | -28.2530      |
+| 0.3831        | 4.9993 | 3342 | 0.6783          | 2.4765         | 2.0527           | 0.6418             | 0.4238          | -772.7513      | -924.6838    | -34.0051        | -28.0668      |
+| 0.352         | 6.0    | 4011 | 0.6782          | 2.4441         | 2.0097           | 0.6440             | 0.4344          | -773.1808      | -925.0074    | -34.0868        | -28.1418      |
+| 0.3189        | 6.9993 | 4679 | 0.6840          | 2.2310         | 1.8303           | 0.6343             | 0.4008          | -774.9752      | -927.1384    | -33.9525        | -27.9466      |
+| 0.3006        | 8.0    | 5348 | 0.6882          | 2.4339         | 1.9918           | 0.6388             | 0.4422          | -773.3604      | -925.1093    | -33.7716        | -27.7551      |
+| 0.3152        | 8.9993 | 6016 | 0.6891          | 2.4920         | 2.0457           | 0.6407             | 0.4462          | -772.8206      | -924.5289    | -33.6753        | -27.6463      |
+| 0.2752        | 9.9925 | 6680 | 0.6892          | 2.4562         | 2.0151           | 0.6410             | 0.4411          | -773.1274      | -924.8871    | -33.6818        | -27.6538      |

 ### Framework versions
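As a quick sanity check on the DPO metrics in this card, the rewards margin should equal the chosen reward minus the rejected reward at each evaluation step. A minimal sketch in Python, using values copied from the results table:

```python
# (rewards_chosen, rewards_rejected, rewards_margin) from selected eval rows
rows = [
    (1.6222, 1.3204, 0.3018),  # epoch ~1
    (1.8546, 1.5416, 0.3130),  # epoch 2
    (2.0534, 1.6918, 0.3616),  # epoch ~3
    (2.4562, 2.0151, 0.4411),  # epoch ~10 (final)
]

for chosen, rejected, margin in rows:
    # Margin is defined as chosen reward minus rejected reward.
    assert abs((chosen - rejected) - margin) < 5e-4, (chosen, rejected, margin)

print("margins consistent")
```

The margins line up to reporting precision, while the validation loss rises after the first epoch even as train loss falls, which is the usual sign the best checkpoint is the early one.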
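The hyperparameters above also imply the reported total_train_batch_size, and together with the step counts in the results table they let one estimate the training-set size. A sketch (the ~21k-pair figure is an inference from the logged steps, not stated in the card):

```python
train_batch_size = 8
gradient_accumulation_steps = 4

# Effective batch size = per-device batch size x accumulation steps.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 32

# The table logs ~668 optimizer steps per epoch (668, 1337, 2005, ...),
# so the training set holds roughly 668 * 32 preference pairs.
steps_per_epoch = 668
approx_train_pairs = steps_per_epoch * total_train_batch_size
print(approx_train_pairs)  # 21376
```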
model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4ef27d8cae5a67ccd7d8d3e0727dab7ae405ec179914db0c9054695a90af4a78
 size 497774208
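The weight-file size in the LFS pointer is consistent with GPT-2-small stored in fp32: roughly 124M parameters at 4 bytes each. A rough check (the parameter estimate is approximate, since a safetensors file also carries a small JSON header alongside the tensor data):

```python
size_bytes = 497_774_208   # size recorded in the LFS pointer above
bytes_per_param_fp32 = 4   # float32 weights

approx_params = size_bytes / bytes_per_param_fp32
# GPT-2-small has ~124.4M parameters; the small surplus over the exact
# count is the safetensors header.
assert 124e6 < approx_params < 125e6
print(f"~{approx_params / 1e6:.1f}M params")
```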