Ohjunghyun committed on
Commit 93dd039 · verified · 1 Parent(s): 8a56b0c

Upload TFBertForSequenceClassification

Files changed (3):
  1. README.md (+9 −12)
  2. config.json (+1 −5)
  3. tf_model.h5 (+1 −1)
README.md CHANGED
@@ -16,11 +16,11 @@ probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 0.0251
- - Train Accuracy: 0.9928
- - Validation Loss: 0.5767
- - Validation Accuracy: 0.8752
- - Epoch: 4
+ - Train Loss: 0.1942
+ - Train Accuracy: 0.9247
+ - Validation Loss: 0.3159
+ - Validation Accuracy: 0.8760
+ - Epoch: 1
 
  ## Model description
 
@@ -39,22 +39,19 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1058, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 117, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.1}
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 423, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 47, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.1}
  - training_precision: float32
 
  ### Training results
 
  | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
  |:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
- | 0.4002 | 0.8104 | 0.3196 | 0.8654 | 0 |
- | 0.2221 | 0.9113 | 0.3135 | 0.8688 | 1 |
- | 0.1053 | 0.9627 | 0.4027 | 0.8730 | 2 |
- | 0.0459 | 0.9857 | 0.5210 | 0.8700 | 3 |
- | 0.0251 | 0.9928 | 0.5767 | 0.8752 | 4 |
+ | 0.3708 | 0.8309 | 0.3033 | 0.8726 | 0 |
+ | 0.1942 | 0.9247 | 0.3159 | 0.8760 | 1 |
 
 
  ### Framework versions
 
- - Transformers 4.51.1
+ - Transformers 4.51.3
  - TensorFlow 2.18.0
  - Tokenizers 0.21.1
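The optimizer entry above describes a linear warmup (47 steps up to 5e-05) followed by a polynomial decay with power 1.0, i.e. a linear ramp down to 0.0 over 423 steps. A minimal sketch of that schedule in plain Python; whether the decay counter is offset by the warmup steps is an assumption here (it matches the convention of `transformers.create_optimizer`, which these config values resemble), not something the diff states:

```python
# Sketch of the learning-rate schedule described by the new optimizer config:
# linear warmup for 47 steps up to 5e-05, then polynomial (power 1.0) decay
# to 0.0 over 423 steps. Indexing the decay from the end of warmup is an
# assumption, following transformers' create_optimizer convention.

INITIAL_LR = 5e-05
WARMUP_STEPS = 47
DECAY_STEPS = 423
END_LR = 0.0
POWER = 1.0

def learning_rate(step: int) -> float:
    if step < WARMUP_STEPS:
        # WarmUp: lr grows as (step / warmup_steps) ** power
        return INITIAL_LR * (step / WARMUP_STEPS) ** POWER
    # PolynomialDecay, indexed from the end of warmup (assumption)
    decay_step = min(step - WARMUP_STEPS, DECAY_STEPS)
    frac = 1.0 - decay_step / DECAY_STEPS
    return (INITIAL_LR - END_LR) * frac ** POWER + END_LR

peak = learning_rate(WARMUP_STEPS)                  # reaches 5e-05 after warmup
final = learning_rate(WARMUP_STEPS + DECAY_STEPS)   # decays to 0.0 at the end
```

Note that 47 + 423 = 470 total steps over the 2 epochs in the results table implies 235 steps per epoch, consistent with the previous run's 117 + 1058 = 1175 steps over 5 epochs.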
config.json CHANGED
@@ -7,10 +7,6 @@
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
- "id2label": {
-   "0": "\ubd80\uc815",
-   "1": "\uae0d\uc815"
- },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
@@ -20,7 +16,7 @@
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
- "transformers_version": "4.51.1",
+ "transformers_version": "4.51.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 32000
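This diff removes the explicit `id2label` mapping (the escaped values decode to the Korean sentiment labels 부정 "negative" and 긍정 "positive"). Without it, `transformers` falls back to generic `LABEL_{i}` names derived from `num_labels`. A sketch of what that default looks like for this 2-label head:

```python
# With "id2label" removed from config.json, transformers' PretrainedConfig
# falls back to generic label names. A sketch of that default for 2 labels:
num_labels = 2
id2label = {i: f"LABEL_{i}" for i in range(num_labels)}
label2id = {label: i for i, label in id2label.items()}

# The mapping this commit removed (decoded from the JSON unicode escapes):
removed_id2label = {0: "\ubd80\uc815", 1: "\uae0d\uc815"}  # 부정 = negative, 긍정 = positive
```

So pipelines built on this commit will report `LABEL_0` / `LABEL_1` instead of the sentiment names unless the mapping is restored in the config.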
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:85244e4e7fa75f557ecb247ac692b6cc858274cd503f1fd5b9d057a8146c6e09
+ oid sha256:025ab30214618f00c36e6d3992506b42f0d6e34f02715c88ac37837578e6f3bb
  size 442763544
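The `tf_model.h5` entries are Git LFS pointer files (per the spec URL in the first line), not the weights themselves: only the sha256 digest changed, while the size stayed identical. A minimal sketch of reading the key-value fields out of such a pointer; `parse_lfs_pointer` is an illustrative helper, not part of any library:

```python
# Minimal parser for the Git LFS pointer format shown above
# (space-separated key-value lines: version, oid, size).
# A sketch, not a full implementation of the LFS pointer spec.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:025ab30214618f00c36e6d3992506b42f0d6e34f02715c88ac37837578e6f3bb
size 442763544
"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)
size_bytes = int(info["size"])  # ~422 MiB, unchanged across this commit
```

The unchanged `size` with a new `oid` is what you would expect from re-uploading retrained weights for the same architecture: identical tensor shapes, different values.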