HuyTran1301 committed on
Commit e2eba99 · verified · 1 Parent(s): e0beada

Model save

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -34,11 +34,11 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 6
+- train_batch_size: 24
 - eval_batch_size: 4
 - seed: 42
 - gradient_accumulation_steps: 32
-- total_train_batch_size: 192
+- total_train_batch_size: 768
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 8
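
The changed totals are consistent with how the effective batch size is derived: per-device batch size × gradient_accumulation_steps × number of devices (apparently 1 here, since 6 × 32 = 192 and 24 × 32 = 768). A minimal sketch of that arithmetic, with the function name chosen for illustration:

```python
def total_train_batch_size(per_device: int, grad_accum: int, num_devices: int = 1) -> int:
    """Effective train batch size: per-device size x accumulation steps x devices."""
    return per_device * grad_accum * num_devices

# Old config: 6 per device, 32 accumulation steps -> 192
print(total_train_batch_size(6, 32))   # 192
# New config: 24 per device, 32 accumulation steps -> 768
print(total_train_batch_size(24, 32))  # 768
```

With gradient accumulation unchanged at 32, quadrupling the per-device batch size (6 → 24) quadruples the total (192 → 768), which matches both edited lines in the diff.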