Swephoenix committed (verified)
Commit 2429de1 · Parent(s): 520cbe7

Model save

Files changed (1): README.md (+6 −6)
README.md CHANGED

@@ -37,12 +37,12 @@ The following hyperparameters were used during training:
 - train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 4
+- gradient_accumulation_steps: 2
-- total_train_batch_size: 16
+- total_train_batch_size: 8
-- optimizer: Use OptimizerNames.ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.03
-- num_epochs: 5
+- num_epochs: 2
 - mixed_precision_training: Native AMP

 ### Training results

@@ -52,7 +52,7 @@ The following hyperparameters were used during training:
 ### Framework versions

 - PEFT 0.14.0
-- Transformers 4.48.3
+- Transformers 4.49.0
 - Pytorch 2.6.0+cu124
-- Datasets 3.4.0
+- Datasets 3.3.1
 - Tokenizers 0.21.0
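Note that `total_train_batch_size` is a derived value, not an independent setting: it is `train_batch_size × gradient_accumulation_steps` (with single-device training). A minimal sketch checking the arithmetic behind both sides of this diff (the variable names are illustrative, not taken from the training script):

```python
# Per-device batch size is unchanged across the commit.
train_batch_size = 4

# Before this commit: 4 accumulation steps -> effective batch of 16.
total_before = train_batch_size * 4

# After this commit: 2 accumulation steps -> effective batch of 8.
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps

print(total_before)             # 16
print(total_train_batch_size)   # 8
```

This matches the README change from `total_train_batch_size: 16` to `total_train_batch_size: 8`.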