dtorber committed on
Commit f78761d · verified · 1 Parent(s): 0536368

Model save

Files changed (2):
1. README.md +27 -22
2. model.safetensors +1 -1
README.md CHANGED
@@ -16,16 +16,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 # bert-base-multilingual-cased
 
-This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
+This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9372
-- F1 Macro: 0.8644
-- F1: 0.9023
-- F1 Neg: 0.8264
-- Acc: 0.875
-- Prec: 0.8919
-- Recall: 0.9130
-- Mcc: 0.7292
+- Loss: 0.3054
+- F1 Macro: 0.8821
+- F1: 0.9191
+- F1 Neg: 0.8452
+- Acc: 0.8938
+- Prec: 0.9130
+- Recall: 0.9253
+- Mcc: 0.7645
 
 ## Model description
 
@@ -51,23 +51,28 @@ The following hyperparameters were used during training:
 - distributed_type: multi-GPU
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 10
+- num_epochs: 15
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 | F1 Neg | Acc | Prec | Recall | Mcc |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|:------:|:------:|
-| No log | 1.0 | 400 | 0.6220 | 0.6711 | 0.8372 | 0.5051 | 0.755 | 0.7241 | 0.9921 | 0.4790 |
-| 0.492 | 2.0 | 800 | 0.4937 | 0.8112 | 0.8782 | 0.7442 | 0.835 | 0.8264 | 0.9370 | 0.6375 |
-| 0.3428 | 3.0 | 1200 | 0.5508 | 0.8341 | 0.8855 | 0.7826 | 0.85 | 0.8593 | 0.9134 | 0.6713 |
-| 0.2386 | 4.0 | 1600 | 0.7895 | 0.8255 | 0.8800 | 0.7709 | 0.8425 | 0.8524 | 0.9094 | 0.6545 |
-| 0.1347 | 5.0 | 2000 | 0.9012 | 0.8267 | 0.8830 | 0.7704 | 0.845 | 0.8478 | 0.9213 | 0.6595 |
-| 0.1347 | 6.0 | 2400 | 1.1302 | 0.8152 | 0.8577 | 0.7727 | 0.825 | 0.8866 | 0.8307 | 0.6333 |
-| 0.0523 | 7.0 | 2800 | 1.2044 | 0.8360 | 0.8880 | 0.7839 | 0.8525 | 0.8571 | 0.9213 | 0.6765 |
-| 0.016 | 8.0 | 3200 | 1.2032 | 0.8346 | 0.8851 | 0.7842 | 0.85 | 0.8619 | 0.9094 | 0.6717 |
-| 0.0077 | 9.0 | 3600 | 1.2762 | 0.8236 | 0.8814 | 0.7658 | 0.8425 | 0.8448 | 0.9213 | 0.6539 |
-| 0.0004 | 10.0 | 4000 | 1.2740 | 0.8360 | 0.8880 | 0.7839 | 0.8525 | 0.8571 | 0.9213 | 0.6765 |
+| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 | F1 Neg | Acc | Prec | Recall | Mcc |
+|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|:------:|:------:|
+| 0.505 | 1.0 | 800 | 0.3804 | 0.8285 | 0.8856 | 0.7715 | 0.8475 | 0.8489 | 0.9255 | 0.6639 |
+| 0.3286 | 2.0 | 1600 | 0.3744 | 0.8564 | 0.9006 | 0.8123 | 0.87 | 0.8787 | 0.9235 | 0.7150 |
+| 0.2764 | 3.0 | 2400 | 0.6199 | 0.8398 | 0.8949 | 0.7848 | 0.8588 | 0.8513 | 0.9431 | 0.6897 |
+| 0.1761 | 4.0 | 3200 | 0.6083 | 0.8506 | 0.8990 | 0.8022 | 0.8662 | 0.8670 | 0.9333 | 0.7061 |
+| 0.1235 | 5.0 | 4000 | 0.8350 | 0.8516 | 0.8962 | 0.8071 | 0.865 | 0.8792 | 0.9137 | 0.7046 |
+| 0.0508 | 6.0 | 4800 | 0.8916 | 0.8425 | 0.8822 | 0.8027 | 0.8525 | 0.8984 | 0.8667 | 0.6859 |
+| 0.0386 | 7.0 | 5600 | 1.0909 | 0.8488 | 0.8983 | 0.7993 | 0.865 | 0.8641 | 0.9353 | 0.7033 |
+| 0.0255 | 8.0 | 6400 | 1.1529 | 0.8464 | 0.8853 | 0.8074 | 0.8562 | 0.9006 | 0.8706 | 0.6936 |
+| 0.0276 | 9.0 | 7200 | 1.2287 | 0.8499 | 0.8955 | 0.8043 | 0.8638 | 0.8762 | 0.9157 | 0.7015 |
+| 0.0229 | 10.0 | 8000 | 1.1899 | 0.8440 | 0.8854 | 0.8027 | 0.855 | 0.8924 | 0.8784 | 0.6883 |
+| 0.0127 | 11.0 | 8800 | 1.2131 | 0.8521 | 0.8958 | 0.8085 | 0.865 | 0.8821 | 0.9098 | 0.7051 |
+| 0.0091 | 12.0 | 9600 | 1.3365 | 0.8515 | 0.8896 | 0.8134 | 0.8612 | 0.9030 | 0.8765 | 0.7037 |
+| 0.0073 | 13.0 | 10400 | 1.3723 | 0.8492 | 0.8895 | 0.8089 | 0.86 | 0.8948 | 0.8843 | 0.6985 |
+| 0.0083 | 14.0 | 11200 | 1.4210 | 0.8465 | 0.8876 | 0.8055 | 0.8575 | 0.8929 | 0.8824 | 0.6931 |
+| 0.0 | 15.0 | 12000 | 1.4425 | 0.8434 | 0.8882 | 0.7986 | 0.8562 | 0.8805 | 0.8961 | 0.6871 |
 
 
 ### Framework versions
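The updated hyperparameters change `num_epochs` from 10 to 15 with `lr_scheduler_type: linear`. In the Hugging Face Trainer a linear scheduler typically ramps up over an optional warmup phase and then decays the learning rate linearly to zero by the final step. A minimal sketch of that decay rule, assuming a made-up base learning rate and zero warmup (neither is recorded in this card; only the 12000-step total comes from the table):

```python
def linear_lr(step, total_steps, base_lr, warmup_steps=0):
    """Linear warmup (if any) followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / (total_steps - warmup_steps))

# Illustrative values: 12000 total steps as in the results table;
# the base LR of 2e-5 is an assumption, not taken from this run.
base_lr = 2e-5
schedule = [linear_lr(s, total_steps=12000, base_lr=base_lr) for s in (0, 6000, 12000)]
```

By the last epoch the learning rate has decayed essentially to zero, which is consistent with the training loss of 0.0 reported at epoch 15.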
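The card reports both a positive-class F1, a negative-class "F1 Neg", and their unweighted average as "F1 Macro". As a rough illustration of how those columns relate for a binary classifier, here is a pure-Python sketch; the labels and predictions are made up for the example and are not this model's outputs:

```python
# Hypothetical binary labels/predictions, for illustration only.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]

def f1(y_true, y_pred, positive):
    """F1 score treating `positive` as the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

f1_pos = f1(y_true, y_pred, 1)       # the "F1" column
f1_neg = f1(y_true, y_pred, 0)       # the "F1 Neg" column
f1_macro = (f1_pos + f1_neg) / 2     # the "F1 Macro" column
acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # "Acc"
```

Note how macro-F1 sits below the positive-class F1 whenever the negative class is harder, which matches the card's numbers (F1 0.9191 vs. F1 Neg 0.8452 vs. F1 Macro 0.8821).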
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ac57ae8db07d3e254caef4442281ede7e1bf51bb580c3bdb65ed672d746f2f5d
+oid sha256:73f071b978e9ea8d51518111762369fe22b604f8daecb1a0ef6982827af97bbe
 size 711443456
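The model.safetensors entry above is a Git LFS pointer file: the repository stores only the object's sha256 and size, while the weights themselves live in LFS storage. A minimal sketch of parsing such a pointer and checking a downloaded blob against it (the helper names are illustrative, not part of any LFS tooling):

```python
import hashlib

# The new pointer from the diff above.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:73f071b978e9ea8d51518111762369fe22b604f8daecb1a0ef6982827af97bbe
size 711443456
"""

def parse_lfs_pointer(text):
    """Parse the `key value` lines of a Git LFS pointer into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

def verify_blob(data, pointer):
    """Check a downloaded blob against the pointer's size and sha256."""
    return (len(data) == pointer["size"]
            and hashlib.sha256(data).hexdigest() == pointer["digest"])

info = parse_lfs_pointer(pointer_text)
```

Since the size is unchanged (711443456 bytes) while the oid differs, this commit replaced the weights with a retrained checkpoint of the same architecture.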