ninagroot/GPT2-705Mtest
README.md
CHANGED
@@ -13,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss:
+- Loss: 6.8497
 
 ## Model description
 
@@ -33,11 +33,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.00025
-- train_batch_size:
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size:
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 50
@@ -48,29 +48,46 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+| 7.607         | 1.0   | 7    | 7.9535          |
+| 6.4914        | 2.0   | 14   | 7.1690          |
+| 6.0264        | 3.0   | 21   | 6.4225          |
+| 4.9537        | 4.0   | 28   | 6.0582          |
+| 4.6624        | 5.0   | 35   | 5.6295          |
+| 4.1858        | 6.0   | 42   | 5.4364          |
+| 3.4042        | 7.0   | 49   | 5.6539          |
+| 3.4375        | 8.0   | 56   | 5.3934          |
+| 3.1425        | 9.0   | 63   | 5.3686          |
+| 3.0208        | 10.0  | 70   | 5.4510          |
+| 2.855         | 11.0  | 77   | 5.6289          |
+| 2.5067        | 12.0  | 84   | 5.7600          |
+| 2.369         | 13.0  | 91   | 5.8043          |
+| 2.2087        | 14.0  | 98   | 5.9449          |
+| 1.9651        | 15.0  | 105  | 6.0183          |
+| 1.8533        | 16.0  | 112  | 6.1303          |
+| 1.5668        | 17.0  | 119  | 6.1822          |
+| 1.2826        | 18.0  | 126  | 6.2579          |
+| 1.0517        | 19.0  | 133  | 6.3620          |
+| 0.8265        | 20.0  | 140  | 6.4218          |
+| 0.5489        | 21.0  | 147  | 6.4343          |
+| 0.3733        | 22.0  | 154  | 6.4700          |
+| 0.2322        | 23.0  | 161  | 6.5601          |
+| 0.15          | 24.0  | 168  | 6.5968          |
+| 0.1128        | 25.0  | 175  | 6.6768          |
+| 0.0703        | 26.0  | 182  | 6.7425          |
+| 0.0618        | 27.0  | 189  | 6.7583          |
+| 0.0403        | 28.0  | 196  | 6.7516          |
+| 0.0273        | 29.0  | 203  | 6.8169          |
+| 0.0227        | 30.0  | 210  | 6.8227          |
+| 0.0178        | 31.0  | 217  | 6.8049          |
+| 0.0131        | 32.0  | 224  | 6.8238          |
+| 0.0113        | 33.0  | 231  | 6.8419          |
+| 0.0126        | 34.0  | 238  | 6.8478          |
+| 0.0121        | 35.0  | 245  | 6.8468          |
+| 0.0103        | 36.0  | 252  | 6.8474          |
+| 0.0105        | 37.0  | 259  | 6.8487          |
+| 0.008         | 38.0  | 266  | 6.8494          |
+| 0.0118        | 39.0  | 273  | 6.8498          |
+| 0.0079        | 40.0  | 280  | 6.8497          |
 
 
 ### Framework versions
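The hyperparameter list above reports total_train_batch_size: 32 next to train_batch_size: 8 and gradient_accumulation_steps: 4. A minimal sketch of how that effective batch size is derived; the single-device count is my assumption, since the card does not state how many GPUs were used:

```python
# Values copied from the hyperparameter list in the README diff above.
train_batch_size = 8             # per-device batch size
gradient_accumulation_steps = 4  # optimizer steps accumulate over 4 batches
num_devices = 1                  # assumption: one GPU (not stated in the card)

# Effective batch size per optimizer step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 32, matching total_train_batch_size in the card
```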
|
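The results table shows validation loss bottoming out at epoch 9 (5.3686) while training loss keeps falling toward zero, i.e. the run overfits well before epoch 40. A small sketch of picking the best checkpoint from the table; the row subset re-entered here and the variable names are illustrative, with values copied from the table:

```python
# (epoch, validation_loss) pairs copied from the training table above;
# only a few rows around the minimum, plus the final epoch, are re-entered.
rows = [
    (8, 5.3934),
    (9, 5.3686),   # global minimum of the full table
    (10, 5.4510),
    (40, 6.8497),  # final epoch: validation loss has climbed back up
]

# Best checkpoint = the epoch with the lowest validation loss.
best_epoch, best_val = min(rows, key=lambda r: r[1])
print(best_epoch, best_val)  # 9 5.3686
```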
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:1e15cffb20ce445937d71d9af0ce6de30ee80d67d8e3c545970c6ffee03beb8e
 size 2796386080
runs/Apr17_11-29-53_gcn72.local.snellius.surf.nl/events.out.tfevents.1713346201.gcn72.local.snellius.surf.nl.3351341.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5407f7d6ea4e696e32c68f04493e90988558acc97a868428019a0213bf19dc49
+size 74353
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b14ae6a4eb91ae7a48f011bea7ab8fd663f6f33af4cf501f15323656b828c040
 size 4984
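The model.safetensors, events, and training_args.bin entries above are Git LFS pointer files: three key-value lines (version, oid, size) in the format given by the spec URL they carry. A minimal parsing sketch; `parse_lfs_pointer` is an illustrative helper, not part of any tool used here:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from the training_args.bin diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:b14ae6a4eb91ae7a48f011bea7ab8fd663f6f33af4cf501f15323656b828c040
size 4984
"""

fields = parse_lfs_pointer(pointer)
print(fields["size"])                  # 4984 (bytes of the real file)
print(fields["oid"].split(":", 1)[0])  # sha256 (hash algorithm prefix)
```

The pointer is what git stores in place of the large binary; the `oid` hash identifies the actual object on the LFS server.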