ninagroot/GPT2-705M

Commit 4ba62bd (verified) · committed by ninagroot · Parent: 518ccef
README.md CHANGED
@@ -13,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 3.7710
+ - Loss: 3.3576
 
  ## Model description
 
@@ -40,54 +40,34 @@ The following hyperparameters were used during training:
  - total_train_batch_size: 128
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 300
- - num_epochs: 40
+ - lr_scheduler_warmup_steps: 50
+ - num_epochs: 20
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 8.1532 | 1.0 | 3 | 7.4554 |
- | 6.3073 | 2.0 | 6 | 6.3638 |
- | 5.7181 | 3.0 | 9 | 5.8153 |
- | 4.959 | 4.0 | 12 | 5.3005 |
- | 4.6181 | 5.0 | 15 | 5.1042 |
- | 4.333 | 6.0 | 18 | 4.8877 |
- | 4.1506 | 7.0 | 21 | 4.6619 |
- | 4.0683 | 8.0 | 24 | 4.3450 |
- | 3.9754 | 9.0 | 27 | 4.1621 |
- | 3.6008 | 10.0 | 30 | 3.9766 |
- | 3.3515 | 11.0 | 33 | 3.9028 |
- | 3.3668 | 12.0 | 36 | 3.8544 |
- | 3.0457 | 13.0 | 39 | 3.7981 |
- | 3.1136 | 14.0 | 42 | 3.8569 |
- | 2.7678 | 15.0 | 45 | 3.6741 |
- | 2.7723 | 16.0 | 48 | 3.5930 |
- | 2.5602 | 17.0 | 51 | 3.6563 |
- | 2.3488 | 18.0 | 54 | 3.6320 |
- | 2.2859 | 19.0 | 57 | 3.6814 |
- | 2.281 | 20.0 | 60 | 3.5811 |
- | 2.1403 | 21.0 | 63 | 3.7132 |
- | 1.9785 | 22.0 | 66 | 3.6227 |
- | 1.8955 | 23.0 | 69 | 3.7339 |
- | 1.7844 | 24.0 | 72 | 3.6224 |
- | 1.6295 | 25.0 | 75 | 3.7343 |
- | 1.4971 | 26.0 | 78 | 3.5833 |
- | 1.4807 | 27.0 | 81 | 3.6600 |
- | 1.4306 | 28.0 | 84 | 3.7927 |
- | 1.4044 | 29.0 | 87 | 3.7263 |
- | 1.3992 | 30.0 | 90 | 3.7196 |
- | 1.2835 | 31.0 | 93 | 3.8052 |
- | 1.4999 | 32.0 | 96 | 3.6814 |
- | 1.3127 | 33.0 | 99 | 3.8033 |
- | 1.2991 | 34.0 | 102 | 3.8336 |
- | 1.4763 | 35.0 | 105 | 3.7429 |
- | 1.3821 | 36.0 | 108 | 3.7676 |
- | 1.4559 | 37.0 | 111 | 3.7821 |
- | 1.4438 | 38.0 | 114 | 3.7221 |
- | 1.5194 | 39.0 | 117 | 3.7827 |
- | 1.3124 | 40.0 | 120 | 3.7710 |
+ | 6.8456 | 1.0 | 3 | 6.8664 |
+ | 6.2888 | 2.0 | 6 | 6.3933 |
+ | 5.8896 | 3.0 | 9 | 5.9888 |
+ | 5.0916 | 4.0 | 12 | 6.6163 |
+ | 4.962 | 5.0 | 15 | 5.1946 |
+ | 4.5881 | 6.0 | 18 | 6.2555 |
+ | 4.4261 | 7.0 | 21 | 4.7378 |
+ | 4.4532 | 8.0 | 24 | 4.7103 |
+ | 4.3861 | 9.0 | 27 | 4.4524 |
+ | 4.0555 | 10.0 | 30 | 4.2791 |
+ | 3.7985 | 11.0 | 33 | 4.0483 |
+ | 3.7137 | 12.0 | 36 | 3.8826 |
+ | 3.3097 | 13.0 | 39 | 3.8406 |
+ | 3.4086 | 14.0 | 42 | 3.6873 |
+ | 3.1651 | 15.0 | 45 | 3.6276 |
+ | 3.0498 | 16.0 | 48 | 3.5596 |
+ | 2.9617 | 17.0 | 51 | 3.5363 |
+ | 2.7265 | 18.0 | 54 | 3.3944 |
+ | 2.6125 | 19.0 | 57 | 3.3734 |
+ | 2.7022 | 20.0 | 60 | 3.3576 |
 
 
  ### Framework versions
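This commit shortens the schedule: warmup drops from 300 to 50 steps and training from 40 to 20 epochs. Since the results table logs 3 optimizer steps per epoch, the new run takes 60 steps in total, so 50 warmup steps leave only the last 10 steps on the cosine decay. A minimal pure-Python sketch of the learning-rate multiplier a cosine-with-warmup scheduler applies under these settings (not the Trainer's own code):

```python
import math

def lr_multiplier(step, warmup_steps=50, total_steps=60):
    """Cosine schedule with linear warmup (sketch): ramp linearly to 1.0
    over warmup_steps, then decay along a half-cosine to 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

# With warmup_steps=50 and 60 total steps (3 steps/epoch x 20 epochs),
# the peak learning rate is only reached at step 50 of 60.
```

Under these numbers roughly five sixths of the run is warmup ramp, which may be worth double-checking if the schedule was meant to decay for most of training.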
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cf128dee64978c1869bf74168d517a1681424024769093977c5d3b53679c58b6
+ oid sha256:3a1deaf1eaa32f81f7b46fac2a160fc022c66527997c13795fef50b902a4f881
  size 2748401440
runs/Apr29_16-47-26_gcn21.local.snellius.surf.nl/events.out.tfevents.1714402055.gcn21.local.snellius.surf.nl.638712.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19b4bae187bd9c536a318987d72b079a98df0bb56bd1c1769914b6909a14e215
+ size 22766
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bd9d4961827ec85cd42753e799ece6d7f4194df377c6fe4983714cf7e4be75fe
+ oid sha256:5f5506bfcb4f5fde615f3a6b791e0047f78b8dda9bba7d65e079d5b41bd0076b
  size 4984
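model.safetensors, the new TensorBoard event file, and training_args.bin are tracked with Git LFS, so the diffs above change only three-line pointer files (spec version, sha256 oid, byte size), not the binaries themselves. A small sketch of reading such a pointer, using the new training_args.bin pointer from this commit:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its fields (sketch)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:5f5506bfcb4f5fde615f3a6b791e0047f78b8dda9bba7d65e079d5b41bd0076b
size 4984"""

info = parse_lfs_pointer(pointer)
# info["oid"] carries the new hash; info["size"] is the object's byte count.
```

A real LFS client would then fetch the object and check that its SHA-256 digest matches the oid. An unchanged size with a different oid, as for training_args.bin and model.safetensors here, means the contents changed but not the length.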