error577 committed on
Commit 23a9315 · verified · 1 Parent(s): e153bf4

End of training

Files changed (2):
  1. README.md +19 -19
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -47,7 +47,7 @@ flash_attention: true
 fp16:
 fsdp: null
 fsdp_config: null
-gradient_accumulation_steps: 4
+gradient_accumulation_steps: 8
 gradient_checkpointing: true
 group_by_length: false
 hub_model_id: error577/9bbe02f2-64a5-465f-bf4c-4ea2695fbef3
@@ -105,7 +105,7 @@ special_tokens:
 
 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2247
+- Loss: 1.2094
 
 ## Model description
 
@@ -128,8 +128,8 @@ The following hyperparameters were used during training:
 - train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 8
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 16
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 10
@@ -139,21 +139,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.4083        | 0.0003 | 1    | 1.6697          |
-| 1.8789        | 0.0019 | 7    | 1.6074          |
-| 1.438         | 0.0037 | 14   | 1.4139          |
-| 1.1207        | 0.0056 | 21   | 1.3271          |
-| 1.4281        | 0.0075 | 28   | 1.2948          |
-| 1.0291        | 0.0093 | 35   | 1.2702          |
-| 1.2272        | 0.0112 | 42   | 1.2539          |
-| 1.1606        | 0.0131 | 49   | 1.2439          |
-| 1.2959        | 0.0149 | 56   | 1.2395          |
-| 1.3003        | 0.0168 | 63   | 1.2340          |
-| 1.4029        | 0.0187 | 70   | 1.2299          |
-| 1.2938        | 0.0205 | 77   | 1.2267          |
-| 1.164         | 0.0224 | 84   | 1.2257          |
-| 0.9039        | 0.0243 | 91   | 1.2248          |
-| 1.0404        | 0.0262 | 98   | 1.2247          |
+| 1.3023        | 0.0005 | 1    | 1.6697          |
+| 1.848         | 0.0037 | 7    | 1.5881          |
+| 1.3956        | 0.0075 | 14   | 1.3877          |
+| 1.2391        | 0.0112 | 21   | 1.3272          |
+| 1.497         | 0.0149 | 28   | 1.2778          |
+| 1.4533        | 0.0187 | 35   | 1.2552          |
+| 1.2165        | 0.0224 | 42   | 1.2408          |
+| 1.1767        | 0.0262 | 49   | 1.2297          |
+| 0.9731        | 0.0299 | 56   | 1.2223          |
+| 0.8316        | 0.0336 | 63   | 1.2189          |
+| 1.3272        | 0.0374 | 70   | 1.2140          |
+| 1.1467        | 0.0411 | 77   | 1.2112          |
+| 1.2043        | 0.0448 | 84   | 1.2099          |
+| 1.3629        | 0.0486 | 91   | 1.2091          |
+| 1.2862        | 0.0523 | 98   | 1.2094          |
 
 
 ### Framework versions
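The substantive change in this commit is doubling `gradient_accumulation_steps` from 4 to 8, which, with a per-device `train_batch_size` of 2, doubles the effective (total) train batch size from 8 to 16. A minimal sketch of that relationship, in plain Python (illustrative only, not the training code; the single-device case is assumed):

```python
# Illustrative sketch: how gradient accumulation determines the
# effective batch size reported as total_train_batch_size.
# Assumes a single device (no data parallelism).

def effective_batch_size(train_batch_size: int,
                         gradient_accumulation_steps: int) -> int:
    """Examples contributing to each optimizer step on one device."""
    return train_batch_size * gradient_accumulation_steps

# Values before and after this commit:
before = effective_batch_size(2, 4)  # 8, matching the old README
after = effective_batch_size(2, 8)   # 16, matching this commit
```

Accumulating over more micro-batches trades optimizer-step frequency for a larger effective batch at the same memory footprint, which is consistent with this run covering the same 98 steps at roughly double the epoch fraction per step.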
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:20b04f9c744cfb674008e7198571b9510954c7b0dd39347c5f83bfbb7218753b
+oid sha256:88d5954f2509570169eddadaabfd20fdc35423f33169dccd02eedb050baf578c
 size 90258378