kkmkorea committed
Commit 143b1ef · 1 Parent(s): c2c29c9

update model card README.md

Files changed (1): README.md (+10 -12)
README.md CHANGED
@@ -1,6 +1,5 @@
 ---
 license: mit
-base_model: kisti/korscideberta
 tags:
 - generated_from_trainer
 metrics:
@@ -17,8 +16,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [kisti/korscideberta](https://huggingface.co/kisti/korscideberta) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6488
-- Accuracy: 0.7860
+- Loss: 0.7228
+- Accuracy: 0.7370
 
 ## Model description
 
@@ -38,26 +37,25 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1.5e-05
-- train_batch_size: 16
-- eval_batch_size: 8
+- train_batch_size: 32
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
 - num_epochs: 4
+- mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.9289 | 1.32 | 500 | 0.9097 | 0.6835 |
-| 0.5205 | 2.64 | 1000 | 0.7150 | 0.7593 |
-| 0.3888 | 3.96 | 1500 | 0.6488 | 0.7860 |
+| 0.8395 | 2.63 | 500 | 0.7228 | 0.7370 |
 
 
 ### Framework versions
 
-- Transformers 4.31.0
-- Pytorch 2.0.1+cu118
-- Datasets 2.14.4
-- Tokenizers 0.13.3
+- Transformers 4.12.5
+- Pytorch 1.10.1+cu102
+- Datasets 2.14.6
+- Tokenizers 0.10.3
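
Note on the updated hyperparameters: the `generated_from_trainer` tag indicates the 🤗 Trainer was used, so the new values map roughly onto a `TrainingArguments` setup like the sketch below. This is an illustration under assumptions, not the author's training script: the dataset, label count, output path, and evaluation cadence are not stated in the card and the placeholders here are hypothetical.

```python
# Minimal sketch of a Trainer setup matching the hyperparameters in the updated card.
# Assumptions (not from the card): num_labels, output_dir, eval cadence, and the datasets.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Base checkpoint named in the card; num_labels=3 is a guess for illustration,
# and loading may additionally require the tokenizer shipped with the kisti/korscideberta repo.
model = AutoModelForSequenceClassification.from_pretrained(
    "kisti/korscideberta", num_labels=3
)

args = TrainingArguments(
    output_dir="korscideberta-finetuned",  # hypothetical output path
    learning_rate=1.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=4,
    fp16=True,                    # "mixed_precision_training: Native AMP"
    evaluation_strategy="steps",  # assumption; the card reports eval metrics at step 500
    eval_steps=500,
    logging_steps=500,
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # dataset unknown
# trainer.train()
```

The Adam betas (0.9, 0.999) and epsilon 1e-08 listed in the card are the Trainer defaults, so they need no explicit arguments in this sketch.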