Commit 240993e by X1in (verified) · 1 parent: 78a6c49

Model save

Files changed (2):
  1. README.md +2 -2
  2. training_args.bin +1 -1
README.md CHANGED
@@ -40,8 +40,8 @@ The following hyperparameters were used during training:
 - gradient_accumulation_steps: 4
 - total_train_batch_size: 128
 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
-- lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 500
+- lr_scheduler_type: cosine
+- lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 12
 - mixed_precision_training: Native AMP
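The README change swaps linear decay with 500 fixed warmup steps for cosine decay that warms up over the first 10% of training. A minimal sketch of the resulting learning-rate multiplier, assuming the usual Hugging Face cosine-with-warmup shape (half a cosine period after linear warmup); `total_steps` here is illustrative, not taken from the commit:

```python
import math

def lr_multiplier(step, total_steps, warmup_ratio=0.1):
    """Cosine LR multiplier with linear warmup over a ratio of total steps.

    Approximates transformers' cosine schedule with warmup; warmup_ratio=0.1
    matches the lr_scheduler_warmup_ratio added in this commit.
    """
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 to 1 during warmup.
        return step / max(1, warmup_steps)
    # Cosine decay from 1 down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))
```

Unlike a fixed 500-step warmup, a ratio scales with the run length, so the warmup fraction stays the same if the dataset size or epoch count changes.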
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b0fa8fac518c87b55e70533ff7aa1c31b531daa087eeff8cf8a9304d001e9db3
+oid sha256:eb13fb99ec22dc493bf012b618c2176ed5043913514dc869819071db8ff6a26b
 size 5969