End of training

Browse files
- README.md (+9 -68)
- adapter_model.bin (+1 -1)

README.md CHANGED
```diff
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
+- Loss: 1.6604
 
 ## Model description
 
@@ -37,80 +37,21 @@ The following hyperparameters were used during training:
 - train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps:
+- gradient_accumulation_steps: 32
-- total_train_batch_size:
+- total_train_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
-- num_epochs:
+- num_epochs: 2
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 1.
-| 1.
-| 1.
-| 1.7207 | 0.19 | 800 | 1.6650 |
-| 1.7043 | 0.24 | 1000 | 1.6505 |
-| 1.6995 | 0.29 | 1200 | 1.6408 |
-| 1.6793 | 0.34 | 1400 | 1.6331 |
-| 1.7217 | 0.38 | 1600 | 1.6269 |
-| 1.707 | 0.43 | 1800 | 1.6221 |
-| 1.6983 | 0.48 | 2000 | 1.6189 |
-| 1.725 | 0.53 | 2200 | 1.6134 |
-| 1.7349 | 0.58 | 2400 | 1.6099 |
-| 1.624 | 0.62 | 2600 | 1.6064 |
-| 1.6283 | 0.67 | 2800 | 1.6018 |
-| 1.6625 | 0.72 | 3000 | 1.6008 |
-| 1.6532 | 0.77 | 3200 | 1.5982 |
-| 1.7053 | 0.82 | 3400 | 1.5963 |
-| 1.6703 | 0.86 | 3600 | 1.5934 |
-| 1.6875 | 0.91 | 3800 | 1.5919 |
-| 1.6388 | 0.96 | 4000 | 1.5887 |
-| 1.6424 | 1.01 | 4200 | 1.5871 |
-| 1.6535 | 1.06 | 4400 | 1.5862 |
-| 1.6391 | 1.11 | 4600 | 1.5843 |
-| 1.6697 | 1.15 | 4800 | 1.5821 |
-| 1.6567 | 1.2 | 5000 | 1.5811 |
-| 1.6041 | 1.25 | 5200 | 1.5798 |
-| 1.6502 | 1.3 | 5400 | 1.5793 |
-| 1.6313 | 1.35 | 5600 | 1.5774 |
-| 1.6462 | 1.39 | 5800 | 1.5766 |
-| 1.7003 | 1.44 | 6000 | 1.5759 |
-| 1.6321 | 1.49 | 6200 | 1.5737 |
-| 1.6881 | 1.54 | 6400 | 1.5733 |
-| 1.6488 | 1.59 | 6600 | 1.5719 |
-| 1.6319 | 1.63 | 6800 | 1.5715 |
-| 1.6912 | 1.68 | 7000 | 1.5711 |
-| 1.6676 | 1.73 | 7200 | 1.5702 |
-| 1.6251 | 1.78 | 7400 | 1.5684 |
-| 1.6524 | 1.83 | 7600 | 1.5687 |
-| 1.5818 | 1.87 | 7800 | 1.5674 |
-| 1.622 | 1.92 | 8000 | 1.5675 |
-| 1.6299 | 1.97 | 8200 | 1.5661 |
-| 1.6377 | 2.02 | 8400 | 1.5663 |
-| 1.6406 | 2.07 | 8600 | 1.5661 |
-| 1.6194 | 2.11 | 8800 | 1.5653 |
-| 1.5876 | 2.16 | 9000 | 1.5647 |
-| 1.6581 | 2.21 | 9200 | 1.5642 |
-| 1.6311 | 2.26 | 9400 | 1.5641 |
-| 1.6238 | 2.31 | 9600 | 1.5635 |
-| 1.609 | 2.35 | 9800 | 1.5635 |
-| 1.6854 | 2.4 | 10000 | 1.5630 |
-| 1.5952 | 2.45 | 10200 | 1.5624 |
-| 1.6017 | 2.5 | 10400 | 1.5618 |
-| 1.6146 | 2.55 | 10600 | 1.5622 |
-| 1.6021 | 2.59 | 10800 | 1.5616 |
-| 1.605 | 2.64 | 11000 | 1.5613 |
-| 1.6237 | 2.69 | 11200 | 1.5609 |
-| 1.6434 | 2.74 | 11400 | 1.5610 |
-| 1.6267 | 2.79 | 11600 | 1.5605 |
-| 1.5984 | 2.84 | 11800 | 1.5606 |
-| 1.6437 | 2.88 | 12000 | 1.5604 |
-| 1.5999 | 2.93 | 12200 | 1.5603 |
-| 1.6131 | 2.98 | 12400 | 1.5601 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 1.837 | 0.51 | 200 | 1.7257 |
+| 1.7602 | 1.03 | 400 | 1.6777 |
+| 1.7341 | 1.54 | 600 | 1.6604 |
 
 
 ### Framework versions
```
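Since gpt2 is a causal language model, the validation loss reported above is, assuming the transformers default objective, the mean per-token cross-entropy in nats, so it converts to perplexity by exponentiation. A quick sanity check on the new final loss:

```python
import math

eval_loss = 1.6604  # final validation loss from the updated table
perplexity = math.exp(eval_loss)
print(f"perplexity = {perplexity:.2f}")  # about 5.26
```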
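The updated hyperparameters are internally consistent: with train_batch_size 4 and gradient_accumulation_steps 32, the effective batch on a single device is 4 * 32 = 128, which matches total_train_batch_size. Below is a minimal sketch of a transformers TrainingArguments reproducing this configuration; it is an assumption-laden reconstruction, not the author's script: the learning rate sits outside the visible hunk, output_dir is a placeholder, and the 200-step eval and logging cadence is inferred from the results table.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the card's training configuration.
# The Adam betas (0.9, 0.999) and epsilon 1e-08 in the card are the
# AdamW defaults, so they need no explicit arguments here.
args = TrainingArguments(
    output_dir="gpt2-finetuned",     # placeholder path
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,                         # seed: 42
    gradient_accumulation_steps=32,  # effective batch: 4 * 32 = 128
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    warmup_steps=100,                # lr_scheduler_warmup_steps: 100
    num_train_epochs=2,              # num_epochs: 2
    fp16=True,                       # mixed_precision_training: Native AMP
    eval_strategy="steps",           # "evaluation_strategy" on older versions
    eval_steps=200,                  # assumed from the 200-step eval rows
    logging_steps=200,               # training loss column shares that cadence
)
```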
adapter_model.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:7fea90948e544bcdc17692b8ccc1430cb37e9952b1fb30ee0364572522c517db
 size 1188025
```
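The adapter_model.bin entry is a Git LFS pointer: the repository tracks only the sha256 and byte size (1188025 bytes, about 1.2 MB), while the weights themselves live in LFS storage. The filename matches the convention PEFT uses for saved adapters. Assuming this commit ships a PEFT adapter (such as LoRA) on top of gpt2, and with a placeholder repo id, loading it would look roughly like this:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; substitute the actual Hub path of this adapter.
ADAPTER_ID = "your-username/gpt2-adapter"

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Attach the fine-tuned adapter weights (the adapter_model.bin updated
# in this commit) to the frozen gpt2 base.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)

inputs = tokenizer("Hello, world", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```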