minpeter committed (verified)
Commit 9cd5dcc · Parent: a744799

End of training

Files changed (1): README.md (+9 -9)
README.md CHANGED

```diff
@@ -50,12 +50,12 @@ save_steps: 200
 warmup_steps: 100
 eval_steps: 200
 
-sequence_len: 512
+sequence_len: 1024
 sample_packing: true
 pad_to_sequence_len: true
 
 gradient_accumulation_steps: 4
-micro_batch_size: 56
+micro_batch_size: 32
 
 optimizer: paged_adamw_8bit
 lr_scheduler: cosine
@@ -91,7 +91,7 @@ weight_decay: 0.0
 
 This model is a fine-tuned version of [minpeter/pretrained-tiny-ko](https://huggingface.co/minpeter/pretrained-tiny-ko) on the lemon-mint/Korean-FineTome-100k and the lemon-mint/smol-koreantalk datasets.
 It achieves the following results on the evaluation set:
-- Loss: 3.6623
+- Loss: 3.6038
 
 ## Model description
 
@@ -111,24 +111,24 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 56
-- eval_batch_size: 56
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 4
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 896
-- total_eval_batch_size: 224
+- total_train_batch_size: 512
+- total_eval_batch_size: 128
 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 100
-- training_steps: 48
+- training_steps: 102
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 3.6868 | 0.0404 | 1 | 3.6623 |
+| 3.5674 | 0.0193 | 1 | 3.6038 |
 
 
 ### Framework versions
```
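The batch-size totals in the hyperparameter list follow the usual multi-GPU arithmetic: the train total is micro-batch × devices × gradient-accumulation steps, while the eval total skips accumulation. A minimal sketch checking the card's numbers; the helper name is hypothetical and the formula is the standard Transformers convention, not code from this repo:

```python
def effective_batch_sizes(micro_batch: int, num_devices: int,
                          grad_accum_steps: int) -> tuple[int, int]:
    """Hypothetical helper: derive the totals reported in the model card."""
    total_train = micro_batch * num_devices * grad_accum_steps
    total_eval = micro_batch * num_devices  # evaluation does not accumulate gradients
    return total_train, total_eval

# Old config: 56 * 4 * 4 = 896 train, 56 * 4 = 224 eval
assert effective_batch_sizes(56, 4, 4) == (896, 224)
# New config: 32 * 4 * 4 = 512 train, 32 * 4 = 128 eval
assert effective_batch_sizes(32, 4, 4) == (512, 128)
```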
 
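Taken together, the two config changes double the context window (sequence_len 512 → 1024) while cutting the per-device batch roughly in half (micro_batch_size 56 → 32). A rough sketch of the per-optimizer-step token budget, assuming sample packing fills every slot to sequence_len (an upper bound derived from the config, not a measured figure; the helper name is hypothetical):

```python
def max_tokens_per_step(sequence_len: int, micro_batch: int,
                        num_devices: int, grad_accum_steps: int) -> int:
    """Upper bound on tokens per optimizer step, assuming fully packed sequences."""
    return sequence_len * micro_batch * num_devices * grad_accum_steps

print(max_tokens_per_step(512, 56, 4, 4))   # old config: 458,752 tokens/step
print(max_tokens_per_step(1024, 32, 4, 4))  # new config: 524,288 tokens/step
```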