thenlpresearcher committed (verified) · Commit f74c1d0 · 1 Parent(s): 96ebac0
thenlpresearcher/microsoft_mpnet_punct_model
README.md CHANGED
@@ -19,10 +19,10 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1893
- - F1: 0.7606
- - Precision: 0.7932
- - Recall: 0.7307
+ - Loss: 0.1318
+ - F1: 0.8327
+ - Precision: 0.8373
+ - Recall: 0.8281
  
  ## Model description
  
@@ -47,56 +47,43 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - num_epochs: 3
+ - num_epochs: 4
  - mixed_precision_training: Native AMP
  
  ### Training results
  
  | Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
  |:-------------:|:------:|:-----:|:---------------:|:------:|:---------:|:------:|
- | 0.6171 | 0.0776 | 500 | 0.3970 | 0.5982 | 0.7526 | 0.4964 |
- | 0.3326 | 0.1553 | 1000 | 0.3275 | 0.6379 | 0.7121 | 0.5777 |
- | 0.3087 | 0.2329 | 1500 | 0.2986 | 0.6567 | 0.7247 | 0.6004 |
- | 0.2741 | 0.3105 | 2000 | 0.2799 | 0.6754 | 0.7111 | 0.6431 |
- | 0.2695 | 0.3881 | 2500 | 0.2583 | 0.6925 | 0.7634 | 0.6336 |
- | 0.2562 | 0.4658 | 3000 | 0.2514 | 0.6953 | 0.7361 | 0.6588 |
- | 0.2465 | 0.5434 | 3500 | 0.2441 | 0.7045 | 0.7848 | 0.6391 |
- | 0.2413 | 0.6210 | 4000 | 0.2390 | 0.7105 | 0.7590 | 0.6679 |
- | 0.236 | 0.6986 | 4500 | 0.2331 | 0.7104 | 0.7769 | 0.6544 |
- | 0.2327 | 0.7763 | 5000 | 0.2260 | 0.7202 | 0.7720 | 0.6748 |
- | 0.2244 | 0.8539 | 5500 | 0.2288 | 0.7215 | 0.7728 | 0.6766 |
- | 0.2194 | 0.9315 | 6000 | 0.2207 | 0.7215 | 0.7639 | 0.6836 |
- | 0.2202 | 1.0092 | 6500 | 0.2205 | 0.7238 | 0.7548 | 0.6953 |
- | 0.2099 | 1.0868 | 7000 | 0.2178 | 0.7322 | 0.7825 | 0.6880 |
- | 0.2031 | 1.1644 | 7500 | 0.2149 | 0.7328 | 0.7849 | 0.6872 |
- | 0.2051 | 1.2420 | 8000 | 0.2150 | 0.7370 | 0.7859 | 0.6938 |
- | 0.2011 | 1.3197 | 8500 | 0.2109 | 0.7354 | 0.7754 | 0.6993 |
- | 0.1966 | 1.3973 | 9000 | 0.2055 | 0.7363 | 0.7727 | 0.7033 |
- | 0.1988 | 1.4749 | 9500 | 0.2036 | 0.7377 | 0.7734 | 0.7051 |
- | 0.1952 | 1.5526 | 10000 | 0.2011 | 0.7385 | 0.7829 | 0.6989 |
- | 0.1921 | 1.6302 | 10500 | 0.2013 | 0.7409 | 0.7806 | 0.7051 |
- | 0.1957 | 1.7078 | 11000 | 0.2015 | 0.7461 | 0.7849 | 0.7109 |
- | 0.1862 | 1.7854 | 11500 | 0.1981 | 0.7451 | 0.7926 | 0.7029 |
- | 0.1902 | 1.8631 | 12000 | 0.2019 | 0.7444 | 0.7692 | 0.7212 |
- | 0.1886 | 1.9407 | 12500 | 0.1963 | 0.7468 | 0.7713 | 0.7237 |
- | 0.1759 | 2.0183 | 13000 | 0.1998 | 0.7454 | 0.7664 | 0.7255 |
- | 0.169 | 2.0959 | 13500 | 0.1991 | 0.7484 | 0.7762 | 0.7226 |
- | 0.1798 | 2.1736 | 14000 | 0.1935 | 0.7537 | 0.7766 | 0.7321 |
- | 0.1661 | 2.2512 | 14500 | 0.1944 | 0.7531 | 0.7850 | 0.7237 |
- | 0.1681 | 2.3288 | 15000 | 0.1930 | 0.7526 | 0.7755 | 0.7310 |
- | 0.1697 | 2.4065 | 15500 | 0.1938 | 0.7561 | 0.7818 | 0.7321 |
- | 0.1722 | 2.4841 | 16000 | 0.1951 | 0.7541 | 0.7758 | 0.7336 |
- | 0.1639 | 2.5617 | 16500 | 0.1925 | 0.7517 | 0.7853 | 0.7208 |
- | 0.1622 | 2.6393 | 17000 | 0.1925 | 0.7620 | 0.7918 | 0.7343 |
- | 0.1685 | 2.7170 | 17500 | 0.1932 | 0.7602 | 0.7881 | 0.7343 |
- | 0.1663 | 2.7946 | 18000 | 0.1919 | 0.7608 | 0.7880 | 0.7354 |
- | 0.1671 | 2.8722 | 18500 | 0.1916 | 0.7616 | 0.7901 | 0.7350 |
- | 0.1684 | 2.9499 | 19000 | 0.1893 | 0.7606 | 0.7932 | 0.7307 |
+ | 0.174 | 0.1553 | 1000 | 0.1766 | 0.7930 | 0.8137 | 0.7734 |
+ | 0.1408 | 0.3105 | 2000 | 0.1469 | 0.8030 | 0.8143 | 0.7920 |
+ | 0.1326 | 0.4658 | 3000 | 0.1313 | 0.8187 | 0.8399 | 0.7985 |
+ | 0.1283 | 0.6210 | 4000 | 0.1308 | 0.8169 | 0.8188 | 0.8150 |
+ | 0.1259 | 0.7763 | 5000 | 0.1270 | 0.8195 | 0.8325 | 0.8069 |
+ | 0.12 | 0.9315 | 6000 | 0.1224 | 0.8162 | 0.8272 | 0.8055 |
+ | 0.1072 | 1.0868 | 7000 | 0.1221 | 0.8215 | 0.82 | 0.8230 |
+ | 0.1068 | 1.2420 | 8000 | 0.1216 | 0.8208 | 0.8234 | 0.8182 |
+ | 0.1022 | 1.3973 | 9000 | 0.1256 | 0.8234 | 0.8188 | 0.8281 |
+ | 0.1034 | 1.5526 | 10000 | 0.1217 | 0.8267 | 0.8292 | 0.8241 |
+ | 0.1051 | 1.7078 | 11000 | 0.1203 | 0.8288 | 0.8435 | 0.8146 |
+ | 0.1011 | 1.8631 | 12000 | 0.1246 | 0.8299 | 0.8284 | 0.8314 |
+ | 0.0917 | 2.0183 | 13000 | 0.1266 | 0.8248 | 0.8274 | 0.8223 |
+ | 0.0887 | 2.1736 | 14000 | 0.1213 | 0.8261 | 0.8260 | 0.8263 |
+ | 0.0863 | 2.3288 | 15000 | 0.1255 | 0.8272 | 0.8263 | 0.8281 |
+ | 0.0897 | 2.4841 | 16000 | 0.1265 | 0.8210 | 0.8302 | 0.8120 |
+ | 0.0835 | 2.6393 | 17000 | 0.1233 | 0.8299 | 0.8284 | 0.8314 |
+ | 0.0833 | 2.7946 | 18000 | 0.1259 | 0.8341 | 0.8398 | 0.8285 |
+ | 0.0829 | 2.9499 | 19000 | 0.1189 | 0.8328 | 0.8397 | 0.8259 |
+ | 0.0704 | 3.1051 | 20000 | 0.1308 | 0.8302 | 0.8290 | 0.8314 |
+ | 0.073 | 3.2604 | 21000 | 0.1273 | 0.8296 | 0.8330 | 0.8263 |
+ | 0.0711 | 3.4156 | 22000 | 0.1335 | 0.8304 | 0.8399 | 0.8212 |
+ | 0.0695 | 3.5709 | 23000 | 0.1325 | 0.8283 | 0.8353 | 0.8215 |
+ | 0.0708 | 3.7261 | 24000 | 0.1316 | 0.8319 | 0.8384 | 0.8255 |
+ | 0.0706 | 3.8814 | 25000 | 0.1318 | 0.8327 | 0.8373 | 0.8281 |
  
  
  ### Framework versions
  
  - Transformers 4.50.0
- - Pytorch 2.4.0a0+f70bd71a48.nv24.06
+ - Pytorch 2.5.1+cu121
  - Datasets 2.21.0
  - Tokenizers 0.21.4
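
As a quick sanity check on the updated model card, the reported metrics are internally consistent: F1 is the harmonic mean of precision and recall. A minimal check, using the final evaluation values from the diff above:

```python
# Consistency check: F1 should be the harmonic mean of the
# precision and recall reported in the updated model card.
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Final evaluation metrics from the README diff above.
precision, recall = 0.8373, 0.8281
f1 = f1_score(precision, recall)
print(round(f1, 4))  # → 0.8327, matching the reported F1
```

(Checks at intermediate steps may differ in the last digit, since the logged F1 is computed from unrounded precision/recall.)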
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f0445176d9ddec00b8a7feff4e95b604d3b2c527ac8aac9927aff32a2ffe0cc1
+ oid sha256:25440511b0659b4499246e05786fa27796695ace33e2d89688bd9e0b902661dd
  size 435714132
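
Note that the checkpoint size is unchanged — only the weight values were updated, not the architecture. As a rough sketch (assuming float32 weights and ignoring the small safetensors JSON header), the file size implies a parameter count in line with mpnet-base:

```python
# Rough sanity check on the checkpoint size: with float32 weights,
# parameter count ≈ file size / 4 bytes. This ignores the small
# safetensors header, so it slightly overestimates the true count.
size_bytes = 435_714_132  # model.safetensors size from the diff
approx_params = size_bytes // 4
print(f"{approx_params:,}")  # ≈ 108.9M parameters, plausible for mpnet-base
```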
runs/Nov24_14-00-02_ca85e5befb5e/events.out.tfevents.1763993964.ca85e5befb5e.98539.4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72b4c4143ef3c924d58846ddfb106d3c6751918b11dbe675fa53ad3edb3a5110
+ size 516
runs/Nov25_11-14-11_ca85e5befb5e/events.out.tfevents.1764069252.ca85e5befb5e.27037.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56d6637641bcfde7b5befc9fd6a5ca80ec34fd62476f3d004daa62e5336a6261
+ size 44594
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1a32a15ba771074366b52d9971627ec36789aee3ffbc6ef8adfea8a9027e7634
+ oid sha256:513d07248fe181908a9a8c4b1ea0f22e8c198492382f0d4ae9afe4073cc88b31
  size 5368
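
The binary blobs in this commit (model.safetensors, training_args.bin, event logs) are stored as Git LFS pointer files: three `key value` lines giving the spec version, the SHA-256 object id, and the byte size. A minimal parser sketch (illustrative only — in practice `git lfs` handles these files itself):

```python
# Parse a Git LFS pointer file: each line is "key value",
# e.g. "version ...", "oid sha256:...", "size ...".
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The training_args.bin pointer from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:513d07248fe181908a9a8c4b1ea0f22e8c198492382f0d4ae9afe4073cc88b31
size 5368"""

fields = parse_lfs_pointer(pointer)
print(fields["size"])  # → 5368
```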