anpmts committed
Commit 68e20e0 · verified · 1 Parent(s): 18f4715

Model save

Files changed (3)
  1. README.md +58 -11
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -21,13 +21,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.1918
- - Accuracy: 0.935
- - F1: 0.9350
- - Precision: 0.9349
- - Recall: 0.9351
- - F1 Negative: 0.9362
- - F1 Neutral: 0.9337
+ - Loss: 0.0820
+ - Accuracy: 0.9736
+ - F1: 0.9736
+ - Precision: 0.9736
+ - Recall: 0.9736
+ - F1 Negative: 0.9736
+ - F1 Neutral: 0.9736
 - F1 Positive: 0.0
 
 ## Model description
@@ -49,18 +49,65 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
 - train_batch_size: 64
- - eval_batch_size: 256
+ - eval_batch_size: 128
 - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
 - gradient_accumulation_steps: 2
- - total_train_batch_size: 128
+ - total_train_batch_size: 256
+ - total_eval_batch_size: 256
 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
+ - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 1
+ - num_epochs: 3
 - mixed_precision_training: Native AMP
 
 ### Training results
 
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | F1 Negative | F1 Neutral | F1 Positive |
+ |:-------------:|:------:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----------:|:----------:|:-----------:|
+ | 0.1503 | 0.0711 | 1000 | 0.1399 | 0.9526 | 0.9526 | 0.9527 | 0.9526 | 0.9529 | 0.9524 | 0.0 |
+ | 0.123 | 0.1422 | 2000 | 0.1169 | 0.9613 | 0.9613 | 0.9613 | 0.9613 | 0.9612 | 0.9614 | 0.0 |
+ | 0.1085 | 0.2133 | 3000 | 0.1103 | 0.9623 | 0.9623 | 0.9625 | 0.9623 | 0.9619 | 0.9627 | 0.0 |
+ | 0.1071 | 0.2845 | 4000 | 0.0997 | 0.9658 | 0.9658 | 0.9658 | 0.9658 | 0.9656 | 0.9659 | 0.0 |
+ | 0.1048 | 0.3556 | 5000 | 0.0973 | 0.9669 | 0.9669 | 0.9670 | 0.9669 | 0.9671 | 0.9668 | 0.0 |
+ | 0.0952 | 0.4267 | 6000 | 0.1002 | 0.9674 | 0.9674 | 0.9674 | 0.9674 | 0.9676 | 0.9671 | 0.0 |
+ | 0.098 | 0.4978 | 7000 | 0.0952 | 0.9689 | 0.9689 | 0.9689 | 0.9689 | 0.9689 | 0.9689 | 0.0 |
+ | 0.0967 | 0.5689 | 8000 | 0.0930 | 0.9689 | 0.9689 | 0.9689 | 0.9689 | 0.9689 | 0.9690 | 0.0 |
+ | 0.0936 | 0.6400 | 9000 | 0.0926 | 0.9693 | 0.9693 | 0.9694 | 0.9693 | 0.9691 | 0.9695 | 0.0 |
+ | 0.0904 | 0.7111 | 10000 | 0.0946 | 0.9691 | 0.9691 | 0.9693 | 0.9691 | 0.9689 | 0.9694 | 0.0 |
+ | 0.0943 | 0.7823 | 11000 | 0.0880 | 0.9700 | 0.9700 | 0.9700 | 0.9700 | 0.9698 | 0.9701 | 0.0 |
+ | 0.0921 | 0.8534 | 12000 | 0.0867 | 0.9703 | 0.9703 | 0.9703 | 0.9703 | 0.9701 | 0.9704 | 0.0 |
+ | 0.0867 | 0.9245 | 13000 | 0.0878 | 0.9704 | 0.9704 | 0.9704 | 0.9704 | 0.9702 | 0.9706 | 0.0 |
+ | 0.0863 | 0.9956 | 14000 | 0.0871 | 0.9707 | 0.9707 | 0.9707 | 0.9707 | 0.9706 | 0.9708 | 0.0 |
+ | 0.0798 | 1.0667 | 15000 | 0.0883 | 0.9709 | 0.9709 | 0.9710 | 0.9709 | 0.9710 | 0.9709 | 0.0 |
+ | 0.0772 | 1.1378 | 16000 | 0.0871 | 0.9711 | 0.9711 | 0.9712 | 0.9711 | 0.9710 | 0.9712 | 0.0 |
+ | 0.0759 | 1.2089 | 17000 | 0.0884 | 0.9719 | 0.9719 | 0.9719 | 0.9719 | 0.9719 | 0.9719 | 0.0 |
+ | 0.0767 | 1.2800 | 18000 | 0.0857 | 0.9717 | 0.9717 | 0.9717 | 0.9717 | 0.9715 | 0.9718 | 0.0 |
+ | 0.0791 | 1.3512 | 19000 | 0.0870 | 0.9718 | 0.9718 | 0.9718 | 0.9718 | 0.9717 | 0.9719 | 0.0 |
+ | 0.0766 | 1.4223 | 20000 | 0.0827 | 0.9722 | 0.9722 | 0.9722 | 0.9722 | 0.9722 | 0.9721 | 0.0 |
+ | 0.0808 | 1.4934 | 21000 | 0.0829 | 0.9725 | 0.9725 | 0.9725 | 0.9725 | 0.9725 | 0.9725 | 0.0 |
+ | 0.0784 | 1.5645 | 22000 | 0.0824 | 0.9726 | 0.9726 | 0.9726 | 0.9726 | 0.9726 | 0.9725 | 0.0 |
+ | 0.0814 | 1.6356 | 23000 | 0.0811 | 0.9727 | 0.9727 | 0.9727 | 0.9727 | 0.9727 | 0.9727 | 0.0 |
+ | 0.0789 | 1.7067 | 24000 | 0.0825 | 0.9727 | 0.9727 | 0.9727 | 0.9727 | 0.9727 | 0.9727 | 0.0 |
+ | 0.0762 | 1.7778 | 25000 | 0.0806 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.0 |
+ | 0.0766 | 1.8490 | 26000 | 0.0813 | 0.9732 | 0.9732 | 0.9732 | 0.9732 | 0.9731 | 0.9732 | 0.0 |
+ | 0.0764 | 1.9201 | 27000 | 0.0825 | 0.9728 | 0.9728 | 0.9729 | 0.9728 | 0.9727 | 0.9730 | 0.0 |
+ | 0.0737 | 1.9912 | 28000 | 0.0818 | 0.9732 | 0.9732 | 0.9732 | 0.9732 | 0.9733 | 0.9730 | 0.0 |
+ | 0.0644 | 2.0623 | 29000 | 0.0835 | 0.9733 | 0.9733 | 0.9733 | 0.9733 | 0.9732 | 0.9733 | 0.0 |
+ | 0.0678 | 2.1334 | 30000 | 0.0841 | 0.9732 | 0.9732 | 0.9732 | 0.9732 | 0.9732 | 0.9732 | 0.0 |
+ | 0.0653 | 2.2045 | 31000 | 0.0842 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.0 |
+ | 0.0639 | 2.2756 | 32000 | 0.0827 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9735 | 0.0 |
+ | 0.0617 | 2.3468 | 33000 | 0.0835 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9733 | 0.9734 | 0.0 |
+ | 0.0645 | 2.4179 | 34000 | 0.0824 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9735 | 0.9734 | 0.0 |
+ | 0.0623 | 2.4890 | 35000 | 0.0827 | 0.9733 | 0.9733 | 0.9734 | 0.9733 | 0.9733 | 0.9734 | 0.0 |
+ | 0.0602 | 2.5601 | 36000 | 0.0833 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.0 |
+ | 0.0625 | 2.6312 | 37000 | 0.0830 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9733 | 0.9734 | 0.0 |
+ | 0.0649 | 2.7023 | 38000 | 0.0825 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9734 | 0.9735 | 0.0 |
+ | 0.0574 | 2.7734 | 39000 | 0.0828 | 0.9735 | 0.9735 | 0.9735 | 0.9735 | 0.9735 | 0.9735 | 0.0 |
+ | 0.0632 | 2.8445 | 40000 | 0.0821 | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.0 |
+ | 0.0643 | 2.9157 | 41000 | 0.0820 | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.0 |
+ | 0.0605 | 2.9868 | 42000 | 0.0820 | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.0 |
 
 
 ### Framework versions
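For reference, the hyperparameters listed in the updated card map directly onto Hugging Face `TrainingArguments`. The sketch below is an assumption rather than the author's actual training script (which is not part of this commit): `output_dir` is a placeholder, the 1000-step evaluation cadence is inferred from the results table, and the multi-GPU entries come from launching on 2 devices. The effective batch sizes then follow from 64 per device x 2 devices x 2 accumulation steps = 256 for training, and 128 x 2 = 256 for evaluation.

```python
# Sketch only: reconstructs the card's hyperparameters as TrainingArguments.
# Assumptions: output_dir is a placeholder; the eval cadence is inferred from the
# results table; the run is launched on 2 GPUs (e.g. `torchrun --nproc_per_node=2`
# or `accelerate launch`), which is what produces distributed_type: multi-GPU and
# num_devices: 2 in the generated card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,   # card: train_batch_size
    per_device_eval_batch_size=128,   # card: eval_batch_size
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                 # card: lr_scheduler_warmup_ratio
    optim="adamw_torch_fused",        # card: OptimizerNames.ADAMW_TORCH_FUSED
    seed=42,
    fp16=True,                        # card: Native AMP (assumes a CUDA device)
    eval_strategy="steps",            # `evaluation_strategy` on older releases
    eval_steps=1000,                  # inferred from the 1000-step eval rows above
)

# Effective batch sizes on 2 GPUs, matching the card:
#   total_train_batch_size = 64 * 2 devices * 2 accumulation steps = 256
#   total_eval_batch_size  = 128 * 2 devices = 256
```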
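The per-class F1 columns (negative, neutral, positive) indicate a three-label sequence-classification head on top of xlm-roberta-base. Below is a hedged usage sketch: the repository id is a placeholder, since the hunks shown here do not name the published repo, and the label names are an assumption taken from those column names.

```python
# Hypothetical usage: the repo id is a placeholder, not taken from this commit.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="anpmts/<model-repo>",  # placeholder: substitute the actual repo id
)

print(classifier("This is a perfectly ordinary sentence."))
# Output is a list like [{'label': ..., 'score': ...}]; the label strings depend
# on the id2label mapping in the repo's config.json.
```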
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:b968e5c92b1ac01d2f2edcdff4511a9abc0b158c7e1619abbcb666c49de7815a
+ oid sha256:bb362a929226476a7f9e1765ccda62e9ce99ff8c794dc2022e7a8b0413f9fd80
 size 1112208084
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:921468eb6be6c6a95fbe67ab078968757c8196ee5854b2c20fb456c3aed001ee
+ oid sha256:1c7e5bc10a862bcc90a0584d84beb3af7f7500a78457fd9002216a3c75ed1821
 size 5841
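Both model.safetensors and training_args.bin are Git LFS pointer files: this commit changes only the sha256 oid while the byte size stays the same, so the binaries were replaced with new content of identical size. To materialize the actual files rather than the pointers, something like the sketch below works; the repo id is a placeholder, and "68e20e0" is the commit hash shown at the top of this page.

```python
# Sketch: fetch the LFS-backed weights for this commit via huggingface_hub.
# The repo id is a placeholder, not taken from the commit itself.
from huggingface_hub import hf_hub_download

weights_path = hf_hub_download(
    repo_id="anpmts/<model-repo>",  # placeholder: substitute the actual repo id
    filename="model.safetensors",
    revision="68e20e0",             # commit shown in the header above
)
print(weights_path)
```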