miladalsh committed
Commit ad28609 · verified · 1 Parent(s): 695c4ae

Model save

Files changed (1): README.md (+6 −6)
README.md CHANGED

@@ -41,9 +41,9 @@ The following hyperparameters were used during training:
 - train_batch_size: 1
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 2
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 4
+- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: constant
 - lr_scheduler_warmup_ratio: 0.03
 - num_epochs: 1
@@ -55,7 +55,7 @@
 ### Framework versions
 
 - PEFT 0.12.0
-- Transformers 4.40.2
-- Pytorch 2.4.0+cu121
+- Transformers 4.46.2
+- Pytorch 2.4.1
 - Datasets 2.21.0
-- Tokenizers 0.19.1
+- Tokenizers 0.20.1
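For reference, here is a minimal sketch of how the updated hyperparameters above would map onto a Hugging Face `TrainingArguments` object. This is an assumption about how the run was configured, not code taken from this repository: `output_dir` is a placeholder, and the comments simply restate the values in the diff.

```python
# Sketch only: reconstructs the post-commit hyperparameters as a
# Hugging Face TrainingArguments config. "out" is a placeholder path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",                # placeholder, not from the commit
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=4,   # raised from 2 in this commit
    optim="adamw_torch_fused",       # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,                  # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=1,
)
```

Note that `total_train_batch_size` is a derived value (per-device batch size × accumulation steps × device count), which is why it moved from 2 to 4 in lockstep with `gradient_accumulation_steps`. The `OptimizerNames.ADAMW_TORCH_FUSED` wording in the optimizer line is the auto-generated phrasing used by newer Transformers model cards, consistent with the Transformers 4.40.2 → 4.46.2 bump in this same commit.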