c-ho committed on
Commit 0dc01ea · verified · 1 Parent(s): 8884df9

c-ho/academic_main_text_classifier_de

Files changed (1): README.md (+11 −11)
README.md CHANGED
@@ -21,11 +21,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2714
-- Accuracy: 0.9193
-- Precision: 0.9193
-- Recall: 0.9193
-- F1: 0.9193
+- Loss: 0.2806
+- Accuracy: 0.9147
+- Precision: 0.9147
+- Recall: 0.9147
+- F1: 0.9147
 
 ## Model description
 
@@ -45,7 +45,7 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 64
+- train_batch_size: 32
 - eval_batch_size: 64
 - seed: 42
 - optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
@@ -57,11 +57,11 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| No log | 1.0 | 134 | 0.7912 | 0.7189 | 0.7189 | 0.7189 | 0.7189 |
-| 1.0163 | 2.0 | 268 | 0.4568 | 0.8517 | 0.8517 | 0.8517 | 0.8517 |
-| 0.5157 | 3.0 | 402 | 0.3827 | 0.8704 | 0.8704 | 0.8704 | 0.8704 |
-| 0.5157 | 4.0 | 536 | 0.2817 | 0.9100 | 0.9100 | 0.9100 | 0.9100 |
-| 0.3219 | 5.0 | 670 | 0.2714 | 0.9193 | 0.9193 | 0.9193 | 0.9193 |
+| 1.1286 | 1.0 | 268 | 0.5176 | 0.8354 | 0.8354 | 0.8354 | 0.8354 |
+| 0.5628 | 2.0 | 536 | 0.3287 | 0.8998 | 0.8998 | 0.8998 | 0.8998 |
+| 0.3064 | 3.0 | 804 | 0.2835 | 0.9091 | 0.9091 | 0.9091 | 0.9091 |
+| 0.2472 | 4.0 | 1072 | 0.2922 | 0.9152 | 0.9152 | 0.9152 | 0.9152 |
+| 0.2236 | 5.0 | 1340 | 0.2806 | 0.9147 | 0.9147 | 0.9147 | 0.9147 |
 
 
 ### Framework versions
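
Model cards in this format are auto-generated by the Hugging Face `Trainer`, and the listed hyperparameters map directly onto `TrainingArguments`. A minimal sketch of the updated run's configuration, assuming `transformers` is installed; `output_dir` is a placeholder, and `num_train_epochs=5` is read off the results table rather than the hyperparameter list:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters in the updated card.
training_args = TrainingArguments(
    output_dir="academic_main_text_classifier_de",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=32,  # changed from 64 in this commit
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    num_train_epochs=5,  # the table logs epochs 1.0 through 5.0
)
```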
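
The halved batch size explains the doubled step counts: both tables are consistent with the same training-set size, since 134 × 64 = 268 × 32 = 8,576 examples (an inference from the step counts, not a figure stated in the card). A quick check:

```python
import math

# Inferred from the new table: 268 optimizer steps per epoch at batch size 32.
train_examples = 268 * 32  # ~8,576 examples (not stated in the card)

def steps_per_epoch(n_examples: int, batch_size: int) -> int:
    # A partial final batch still counts as one optimizer step.
    return math.ceil(n_examples / batch_size)

print(steps_per_epoch(train_examples, 64))  # 134, matching the old run
print(steps_per_epoch(train_examples, 32))  # 268, matching the new run
```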