Luca-Engel committed
Commit 90b9a3d · verified · 1 Parent(s): 0750890

Training in progress, epoch 0

Files changed (3)
  1. README.md +23 -23
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -6,26 +6,26 @@ tags:
 - dpo
 - generated_from_trainer
 model-index:
-- name: gpt2-dpo
+- name: gpt2-dpo-with-cosine-lr-scheduler
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# gpt2-dpo
+# gpt2-dpo-with-cosine-lr-scheduler
 
 This model is a fine-tuned version of [mNLP-project/gpt2-finetuned](https://huggingface.co/mNLP-project/gpt2-finetuned) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4723
-- Rewards/chosen: 6.0254
-- Rewards/rejected: 4.8392
-- Rewards/accuracies: 0.6131
-- Rewards/margins: 1.1862
-- Logps/rejected: -744.8858
-- Logps/chosen: -889.1946
-- Logits/rejected: -39.0461
-- Logits/chosen: -34.0211
+- Loss: 1.1168
+- Rewards/chosen: 3.8849
+- Rewards/rejected: 3.2031
+- Rewards/accuracies: 0.5892
+- Rewards/margins: 0.6818
+- Logps/rejected: -761.2470
+- Logps/chosen: -910.5992
+- Logits/rejected: -36.5651
+- Logits/chosen: -30.3810
 
 ## Model description
 
@@ -44,14 +44,14 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 3e-05
+- learning_rate: 1e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 2
 - total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
+- lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 10
 
@@ -59,16 +59,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
-| 1.2678 | 1.0 | 1337 | 1.5076 | 4.1438 | 3.2671 | 0.5687 | 0.8767 | -760.6065 | -908.0106 | -41.6142 | -35.6298 |
-| 0.804 | 2.0 | 2674 | 1.4723 | 6.0254 | 4.8392 | 0.6131 | 1.1862 | -744.8858 | -889.1946 | -39.0461 | -34.0211 |
-| 0.3498 | 3.0 | 4011 | 1.6023 | 7.3240 | 5.7922 | 0.6108 | 1.5318 | -735.3561 | -876.2083 | -39.8915 | -34.1137 |
-| 0.2381 | 4.0 | 5348 | 1.8159 | 10.2449 | 8.1301 | 0.6082 | 2.1148 | -711.9767 | -846.9996 | -30.1616 | -23.8379 |
-| 0.034 | 5.0 | 6685 | 1.8657 | 7.7458 | 5.8712 | 0.6067 | 1.8746 | -734.5658 | -871.9908 | -42.3683 | -36.2290 |
-| 0.1025 | 6.0 | 8022 | 1.9185 | 6.2157 | 4.4010 | 0.5869 | 1.8146 | -749.2675 | -887.2919 | -39.4627 | -32.6924 |
-| 0.0094 | 7.0 | 9359 | 2.0228 | 5.9773 | 4.0485 | 0.5907 | 1.9288 | -752.7930 | -889.6758 | -40.0105 | -32.9146 |
-| 0.0176 | 8.0 | 10696 | 2.1184 | 7.0049 | 4.8408 | 0.5914 | 2.1641 | -744.8698 | -879.3992 | -42.7145 | -35.6132 |
-| 0.0089 | 9.0 | 12033 | 2.1585 | 6.4084 | 4.2552 | 0.5899 | 2.1531 | -750.7258 | -885.3651 | -42.3970 | -35.1863 |
-| 0.0044 | 10.0 | 13370 | 2.1897 | 6.2187 | 4.0595 | 0.5877 | 2.1592 | -752.6830 | -887.2613 | -42.5334 | -35.2975 |
+| 0.9846 | 1.0 | 1337 | 1.1168 | 3.8849 | 3.2031 | 0.5892 | 0.6818 | -761.2470 | -910.5992 | -36.5651 | -30.3810 |
+| 0.6025 | 2.0 | 2674 | 1.1405 | 5.0060 | 4.0992 | 0.6175 | 0.9068 | -752.2864 | -899.3887 | -35.0528 | -28.9839 |
+| 0.2464 | 3.0 | 4011 | 1.1202 | 4.6754 | 3.6835 | 0.6160 | 0.9919 | -756.4427 | -902.6943 | -39.6513 | -33.3219 |
+| 0.1182 | 4.0 | 5348 | 1.3054 | 7.3114 | 5.8367 | 0.6131 | 1.4747 | -734.9108 | -876.3349 | -35.1974 | -28.6005 |
+| 0.0669 | 5.0 | 6685 | 1.3846 | 6.5378 | 5.0738 | 0.6093 | 1.4640 | -742.5399 | -884.0710 | -39.0355 | -31.8814 |
+| 0.0226 | 6.0 | 8022 | 1.4662 | 6.2901 | 4.6812 | 0.6052 | 1.6089 | -746.4659 | -886.5475 | -40.3811 | -32.9593 |
+| 0.0128 | 7.0 | 9359 | 1.5557 | 5.8081 | 4.1554 | 0.6108 | 1.6527 | -751.7241 | -891.3676 | -39.1744 | -31.2704 |
+| 0.019 | 8.0 | 10696 | 1.6676 | 5.5428 | 3.8458 | 0.6011 | 1.6970 | -754.8205 | -894.0207 | -40.5161 | -32.4700 |
+| 0.0101 | 9.0 | 12033 | 1.7100 | 5.5531 | 3.8215 | 0.6022 | 1.7315 | -755.0627 | -893.9178 | -40.7171 | -32.5929 |
+| 0.0053 | 10.0 | 13370 | 1.7177 | 5.4221 | 3.7030 | 0.6000 | 1.7191 | -756.2481 | -895.2274 | -40.8064 | -32.6689 |
 
 
 ### Framework versions
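The headline change in this README hunk is `lr_scheduler_type: linear` → `cosine` at a lower peak `learning_rate` of 1e-05, with `lr_scheduler_warmup_ratio: 0.1` kept. A minimal sketch of what that schedule looks like over this run, assuming the usual linear-warmup-then-cosine-decay-to-zero shape (as in transformers' `cosine` scheduler) and the 13370 total steps shown in the table:

```python
import math

def cosine_lr_with_warmup(step, total_steps=13370, peak_lr=1e-05, warmup_ratio=0.1):
    """Learning rate at `step`: linear warmup to peak_lr, then cosine decay to 0.

    total_steps, peak_lr, and warmup_ratio mirror this run's hyperparameters;
    the schedule shape is an assumption based on the `cosine` scheduler type.
    """
    warmup_steps = int(total_steps * warmup_ratio)  # 1337 steps, i.e. epoch 1
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With `warmup_ratio: 0.1`, warmup here ends exactly at step 1337 (the end of epoch 1), after which the rate decays smoothly instead of linearly.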
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:999885e52ec8040b3c1a078e8e7aa71469f9a66111a5053a04be273e74492e2d
+oid sha256:4ef27d8cae5a67ccd7d8d3e0727dab7ae405ec179914db0c9054695a90af4a78
 size 497774208
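The weights file is stored via Git LFS, so the diff above is over a small pointer file (spec v1: a `version` line plus `oid` and `size` key/value lines) rather than the 497 MB safetensors blob itself. A minimal sketch of reading such a pointer:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file (spec v1) into a dict of its key/value lines.

    Each line is "<key> <value>"; only the value's hash and size change when
    the tracked file is updated, which is exactly what this commit's diff shows.
    """
    return dict(line.split(" ", 1) for line in text.strip().splitlines())
```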
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5e9fe933f136ec8b8d9bd5296ef093520541ada90df3f9fead74fec43b3c51e9
+oid sha256:bac11c22c2d9c7ffea69e45b04ef286d1c79bca8bf6e705dbce85d7a2c13ca2e
 size 4984
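For context on the `Rewards/*` columns in the evaluation tables above: in DPO logging (as done by trl's `DPOTrainer`), each completion's reward is beta times the policy-vs-reference log-probability gap, margins are chosen minus rejected rewards, accuracy is the fraction of pairs with a positive margin, and the loss is the mean of -log sigmoid(margin). A sketch with illustrative numbers (not taken from this run, and beta left implicit in the reward values):

```python
import math

def dpo_eval_stats(chosen_rewards, rejected_rewards):
    """Aggregate DPO-style eval metrics from per-example pair rewards.

    Each reward is assumed to already be beta * (policy_logp - reference_logp)
    for that completion; inputs below are illustrative, not from this run.
    """
    margins = [c - r for c, r in zip(chosen_rewards, rejected_rewards)]
    n = len(margins)
    return {
        "rewards/accuracies": sum(m > 0 for m in margins) / n,
        "rewards/margins": sum(margins) / n,
        # -log(sigmoid(m)) == log(1 + exp(-m)), computed stably with log1p
        "loss": sum(math.log1p(math.exp(-m)) for m in margins) / n,
    }
```

This is why, in both runs above, validation loss keeps rising after epoch 2-3 even as margins grow: accuracy and the average margin measure ranking, while the loss also penalizes the shrinking-but-wrong pairs heavily.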