dung1308 committed on
Commit b8ec800 · 1 Parent(s): 25f90f4

Training in progress epoch 0

Files changed (2)
  1. README.md +7 -9
  2. tf_model.h5 +1 -1
README.md CHANGED
@@ -13,9 +13,9 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 4.2959
-- Validation Loss: 4.2424
-- Epoch: 2
+- Train Loss: 5.5981
+- Validation Loss: 4.6262
+- Epoch: 0
 
 ## Model description
 
@@ -34,21 +34,19 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -356, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
-- training_precision: mixed_float16
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+- training_precision: float32
 
 ### Training results
 
 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-| 5.2260     | 4.5287          | 0     |
-| 4.3544     | 4.3212          | 1     |
-| 4.2959     | 4.2424          | 2     |
+| 5.5981     | 4.6262          | 0     |
 
 
 ### Framework versions
 
 - Transformers 4.18.0
-- TensorFlow 2.8.0
+- TensorFlow 2.10.1
 - Datasets 2.7.0
 - Tokenizers 0.11.0
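The serialized optimizer above is a `WarmUp` schedule wrapping a `PolynomialDecay`. If this card was generated by transformers' `create_optimizer` helper (an assumption — the card does not say), `decay_steps` is set to `num_train_steps - warmup_steps`, so the negative `decay_steps: -687` would mean the run had roughly 313 total steps and never left the linear warmup ramp. A minimal sketch of the resulting piecewise learning-rate curve, with defaults taken from this card's config:

```python
def effective_lr(step, init_lr=2e-5, warmup_steps=1000,
                 decay_steps=-687, end_lr=0.0, power=1.0):
    """Piecewise LR: linear WarmUp, then PolynomialDecay.

    Defaults mirror the card's optimizer config. A non-positive
    decay_steps degenerates the decay branch to end_lr immediately,
    i.e. training would end before the warmup ramp completes.
    """
    if step < warmup_steps:
        # Linear ramp from 0 up to init_lr over warmup_steps
        return init_lr * (step / warmup_steps)
    if decay_steps <= 0:
        return end_lr
    # power=1.0 makes PolynomialDecay a straight line down to end_lr
    frac = min((step - warmup_steps) / decay_steps, 1.0)
    return (init_lr - end_lr) * (1.0 - frac) ** power + end_lr

print(effective_lr(500))  # halfway through warmup
print(effective_lr(313))  # around the last step of this run, still warming up
```

This is a reconstruction of the schedule math, not the training script itself; the actual run used `AdamWeightDecay` with this schedule plus `weight_decay_rate=0.01`.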
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:77988eccb4b335fd49a2110bae779157c452c65ec72ffdad45ef334e256366ed
+oid sha256:ce96b892de4f324bb1e811a3951f4518bc0f7f76f2c239bb5fe2e2306eddd75d
 size 737945212
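The file above is a git-LFS pointer, not the model weights: it records only the SHA-256 of the real ~738 MB blob. A minimal sketch of checking a downloaded copy against the pointer (the local path is hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks,
    so a 737 MB weights file never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hash recorded by this commit's pointer file:
EXPECTED = "ce96b892de4f324bb1e811a3951f4518bc0f7f76f2c239bb5fe2e2306eddd75d"
# assert sha256_of("tf_model.h5") == EXPECTED  # path hypothetical
```

Since the pointer's `size` field is unchanged (737945212 bytes) while the oid differs, this commit replaced the weights with a same-architecture checkpoint from the new training run.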