ninagroot committed on
Commit fccc788 · verified · 1 Parent(s): e02e496

ninagroot/GPT2-705Mtest
README.md CHANGED
@@ -13,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 6.8497
+- Loss: 5.3613
 
 ## Model description
 
@@ -33,11 +33,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.00025
-- train_batch_size: 8
+- train_batch_size: 32
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 32
+- total_train_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 50
@@ -48,46 +48,29 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 7.607 | 1.0 | 7 | 7.9535 |
-| 6.4914 | 2.0 | 14 | 7.1690 |
-| 6.0264 | 3.0 | 21 | 6.4225 |
-| 4.9537 | 4.0 | 28 | 6.0582 |
-| 4.6624 | 5.0 | 35 | 5.6295 |
-| 4.1858 | 6.0 | 42 | 5.4364 |
-| 3.4042 | 7.0 | 49 | 5.6539 |
-| 3.4375 | 8.0 | 56 | 5.3934 |
-| 3.1425 | 9.0 | 63 | 5.3686 |
-| 3.0208 | 10.0 | 70 | 5.4510 |
-| 2.855 | 11.0 | 77 | 5.6289 |
-| 2.5067 | 12.0 | 84 | 5.7600 |
-| 2.369 | 13.0 | 91 | 5.8043 |
-| 2.2087 | 14.0 | 98 | 5.9449 |
-| 1.9651 | 15.0 | 105 | 6.0183 |
-| 1.8533 | 16.0 | 112 | 6.1303 |
-| 1.5668 | 17.0 | 119 | 6.1822 |
-| 1.2826 | 18.0 | 126 | 6.2579 |
-| 1.0517 | 19.0 | 133 | 6.3620 |
-| 0.8265 | 20.0 | 140 | 6.4218 |
-| 0.5489 | 21.0 | 147 | 6.4343 |
-| 0.3733 | 22.0 | 154 | 6.4700 |
-| 0.2322 | 23.0 | 161 | 6.5601 |
-| 0.15 | 24.0 | 168 | 6.5968 |
-| 0.1128 | 25.0 | 175 | 6.6768 |
-| 0.0703 | 26.0 | 182 | 6.7425 |
-| 0.0618 | 27.0 | 189 | 6.7583 |
-| 0.0403 | 28.0 | 196 | 6.7516 |
-| 0.0273 | 29.0 | 203 | 6.8169 |
-| 0.0227 | 30.0 | 210 | 6.8227 |
-| 0.0178 | 31.0 | 217 | 6.8049 |
-| 0.0131 | 32.0 | 224 | 6.8238 |
-| 0.0113 | 33.0 | 231 | 6.8419 |
-| 0.0126 | 34.0 | 238 | 6.8478 |
-| 0.0121 | 35.0 | 245 | 6.8468 |
-| 0.0103 | 36.0 | 252 | 6.8474 |
-| 0.0105 | 37.0 | 259 | 6.8487 |
-| 0.008 | 38.0 | 266 | 6.8494 |
-| 0.0118 | 39.0 | 273 | 6.8498 |
-| 0.0079 | 40.0 | 280 | 6.8497 |
+| 9.7393 | 0.57 | 1 | 9.7595 |
+| 7.9692 | 1.71 | 3 | 8.8893 |
+| 7.9553 | 2.86 | 5 | 8.2940 |
+| 8.7602 | 4.0 | 7 | 8.8014 |
+| 8.2186 | 4.57 | 8 | 7.8900 |
+| 7.1745 | 5.71 | 10 | 7.5816 |
+| 7.1837 | 6.86 | 12 | 7.3890 |
+| 6.5593 | 8.0 | 14 | 7.1178 |
+| 6.387 | 8.57 | 15 | 8.4858 |
+| 6.4743 | 9.71 | 17 | 6.9945 |
+| 6.1188 | 10.86 | 19 | 6.8243 |
+| 5.9195 | 12.0 | 21 | 6.5761 |
+| 5.7847 | 12.57 | 22 | 6.4606 |
+| 5.4622 | 13.71 | 24 | 6.2584 |
+| 5.2573 | 14.86 | 26 | 6.1843 |
+| 5.0353 | 16.0 | 28 | 5.9988 |
+| 4.8916 | 16.57 | 29 | 5.9437 |
+| 4.6798 | 17.71 | 31 | 5.8515 |
+| 4.6879 | 18.86 | 33 | 5.6935 |
+| 4.3026 | 20.0 | 35 | 5.6336 |
+| 4.2853 | 20.57 | 36 | 5.5061 |
+| 4.0243 | 21.71 | 38 | 5.4732 |
+| 3.819 | 22.86 | 40 | 5.3613 |
 
 
 ### Framework versions
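The updated hyperparameters above are internally consistent: the total train batch size is the per-device train batch size multiplied by the gradient accumulation steps. A minimal sketch of that arithmetic, using the values from the new README:

```python
# Total train batch size implied by the updated hyperparameters:
# per-device train batch size scaled by gradient accumulation steps.
train_batch_size = 32
gradient_accumulation_steps = 4

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching total_train_batch_size in the README
```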
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1e15cffb20ce445937d71d9af0ce6de30ee80d67d8e3c545970c6ffee03beb8e
+oid sha256:7d021d2d8676458c20960a472cdf7297e746755aabd78d27641db72c358c8a3a
 size 2796386080
runs/Apr17_11-41-39_gcn42.local.snellius.surf.nl/events.out.tfevents.1713346911.gcn42.local.snellius.surf.nl.2981629.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:615c479541aa8d424bc6fe838b8e0213e7a63fb03318d903becc0c6cc5c60d2d
+size 19424
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b14ae6a4eb91ae7a48f011bea7ab8fd663f6f33af4cf501f15323656b828c040
+oid sha256:ed32b3d5b0def93391e72339d891569ab53ae8dd8b365dbfdf72094aee89a01c
 size 4984
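The binary files in this commit are stored as Git LFS pointer files with the three-line version / oid / size layout shown above. A minimal sketch of reading such a pointer into its fields (the `parse_pointer` helper is hypothetical, not part of this repo or of git-lfs itself):

```python
# Hypothetical helper: parse a Git LFS pointer file of the form shown above
# ("version <url>", "oid sha256:<hash>", "size <bytes>") into a dict.
def parse_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        # Each line is "key value"; split on the first space only.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ed32b3d5b0def93391e72339d891569ab53ae8dd8b365dbfdf72094aee89a01c
size 4984"""

info = parse_pointer(pointer)
print(info["size"])  # prints "4984"
```

This is just the textual pointer; the actual 2.8 GB weights live in LFS storage, addressed by the sha256 oid.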