Masaki Eguchi committed
Commit: 1463a6f
Parent(s): 37a5250

update model card README.md

Files changed (1):
  1. README.md +35 -15
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the elsevier-oa-cc-by dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.3831
+ - Loss: 1.2956
 
  ## Model description
 
@@ -35,32 +35,52 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.0001
+ - learning_rate: 7e-05
  - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 32
- - total_train_batch_size: 256
+ - gradient_accumulation_steps: 128
+ - total_train_batch_size: 1024
  - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.2
- - num_epochs: 10
+ - num_epochs: 30
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 1.4534 | 1.0 | 128 | 1.4511 |
- | 1.4322 | 2.0 | 256 | 1.4517 |
- | 1.4234 | 3.0 | 384 | 1.4379 |
- | 1.4025 | 4.0 | 512 | 1.4200 |
- | 1.3875 | 5.0 | 640 | 1.4147 |
- | 1.3782 | 6.0 | 768 | 1.4099 |
- | 1.3634 | 7.0 | 896 | 1.3971 |
- | 1.3565 | 8.0 | 1024 | 1.3992 |
- | 1.3474 | 9.0 | 1152 | 1.3897 |
- | 1.341 | 10.0 | 1280 | 1.3856 |
+ | 1.5522 | 0.99 | 31 | 1.4074 |
+ | 1.5314 | 1.99 | 62 | 1.3907 |
+ | 1.5157 | 2.99 | 93 | 1.3799 |
+ | 1.504 | 3.99 | 124 | 1.3777 |
+ | 1.489 | 4.99 | 155 | 1.3654 |
+ | 1.4778 | 5.99 | 186 | 1.3556 |
+ | 1.4674 | 6.99 | 217 | 1.3506 |
+ | 1.4552 | 7.99 | 248 | 1.3414 |
+ | 1.4474 | 8.99 | 279 | 1.3346 |
+ | 1.4396 | 9.99 | 310 | 1.3321 |
+ | 1.4284 | 10.99 | 341 | 1.3314 |
+ | 1.4191 | 11.99 | 372 | 1.3222 |
+ | 1.4146 | 12.99 | 403 | 1.3165 |
+ | 1.4067 | 13.99 | 434 | 1.3227 |
+ | 1.403 | 14.99 | 465 | 1.3175 |
+ | 1.399 | 15.99 | 496 | 1.3154 |
+ | 1.3901 | 16.99 | 527 | 1.3187 |
+ | 1.3891 | 17.99 | 558 | 1.3045 |
+ | 1.3838 | 18.99 | 589 | 1.2992 |
+ | 1.3804 | 19.99 | 620 | 1.2966 |
+ | 1.3792 | 20.99 | 651 | 1.3040 |
+ | 1.3735 | 21.99 | 682 | 1.2964 |
+ | 1.3685 | 22.99 | 713 | 1.2993 |
+ | 1.3697 | 23.99 | 744 | 1.2930 |
+ | 1.3636 | 24.99 | 775 | 1.2943 |
+ | 1.3653 | 25.99 | 806 | 1.2857 |
+ | 1.3623 | 26.99 | 837 | 1.2931 |
+ | 1.3584 | 27.99 | 868 | 1.2911 |
+ | 1.3577 | 28.99 | 899 | 1.2917 |
+ | 1.3573 | 29.99 | 930 | 1.2963 |
 
 
  ### Framework versions
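For context, the updated hyperparameters map onto the `transformers` `TrainingArguments` roughly as sketched below. This is a minimal, hypothetical reconstruction rather than the author's actual training script: the `output_dir` name, the per-epoch evaluation/logging strategy, and the use of `fp16` to get Native AMP are assumptions inferred from the card.

```python
# Hypothetical sketch of the training configuration reported in this model card.
# output_dir and the evaluation/logging strategies are assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-elsevier-oa-cc-by",  # assumed name
    learning_rate=7e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=128,  # 8 * 128 = effective batch size of 1024
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=30,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="epoch",      # assumed: the card reports one eval per epoch
    logging_strategy="epoch",
)
```

With `per_device_train_batch_size=8` and `gradient_accumulation_steps=128`, the effective batch size is 8 × 128 = 1024, matching the `total_train_batch_size` listed in the updated card.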