TopSlayer committed
Commit 13ed5f9 · verified · 1 Parent(s): 653c276

End of training

Files changed (1): README.md +9 -8
README.md CHANGED
@@ -33,16 +33,17 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.0003
- - train_batch_size: 16
+ - learning_rate: 5e-05
+ - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 2
+ - gradient_accumulation_steps: 4
  - total_train_batch_size: 32
  - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - num_epochs: 30
+ - lr_scheduler_warmup_ratio: 0.1
+ - lr_scheduler_warmup_steps: 1000
+ - num_epochs: 60
  - mixed_precision_training: Native AMP

  ### Training results
@@ -52,6 +53,6 @@ The following hyperparameters were used during training:
  ### Framework versions

  - Transformers 4.51.3
- - Pytorch 2.7.0+cu118
- - Datasets 3.6.0
- - Tokenizers 0.21.1
+ - Pytorch 2.7.1+cu118
+ - Datasets 4.0.0
+ - Tokenizers 0.21.2
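
For reference, the updated ("+") hyperparameters map onto the Hugging Face `Trainer` API roughly as below. This is a minimal sketch, not the repository's actual training script: `output_dir` is a hypothetical placeholder, and `fp16=True` is an assumption inferred from the `mixed_precision_training: Native AMP` line. Note that the effective batch size is unchanged by this commit: 8 per device × 4 gradient-accumulation steps = 32, matching `total_train_batch_size`.

```python
from transformers import TrainingArguments

# Sketch of the post-commit configuration, assuming the standard Trainer API.
training_args = TrainingArguments(
    output_dir="./results",          # hypothetical path, not from the model card
    learning_rate=5e-05,
    per_device_train_batch_size=8,   # 8 x 4 accumulation steps = effective batch of 32
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    seed=42,
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    warmup_steps=1000,               # a non-zero value takes precedence over warmup_ratio
    num_train_epochs=60,
    fp16=True,                       # assumed from "mixed_precision_training: Native AMP"
)
```

Listing both a warmup ratio and warmup steps mirrors the auto-generated model card; in the `Trainer`, a `warmup_steps` value greater than zero overrides `warmup_ratio`, so the scheduler warms up for 1000 steps.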