besimray committed
Commit 8237aba · verified · 1 Parent(s): 252aed5

End of training

Files changed (3):
  1. README.md +10 -10
  2. adapter_model.bin +1 -1
  3. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -67,7 +67,7 @@ max_steps: 150
 micro_batch_size: 10
 mlflow_experiment_name: /tmp/MATH-Hard_train_data.json
 model_type: LlamaForCausalLM
-num_epochs: 5
+num_epochs: 10
 optimizer: adamw_bnb_8bit
 output_dir: miner_id_besimray
 pad_to_sequence_len: false
@@ -86,7 +86,7 @@ wandb_entity: besimray24-rayon
 wandb_mode: online
 wandb_project: Public_TuningSN
 wandb_run: miner_id_24
-wandb_runid: efa31d17-18f5-4448-b1b4-f65721354910
+wandb_runid: 3e895734-fc32-4a73-b997-346d471cdefc
 warmup_steps: 10
 weight_decay: 0.01
 xformers_attention: null
@@ -99,7 +99,7 @@ xformers_attention: null
 
 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7629
+- Loss: 0.7634
 
 ## Model description
 
@@ -134,13 +134,13 @@ The following hyperparameters were used during training:
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | 0.9595        | 0.0129 | 1    | 0.9746          |
-| 0.9359        | 0.2572 | 20   | 0.8169          |
-| 0.7895        | 0.5145 | 40   | 0.7905          |
-| 0.7524        | 0.7717 | 60   | 0.7785          |
-| 0.8127        | 1.0289 | 80   | 0.7698          |
-| 0.7258        | 1.2862 | 100  | 0.7665          |
-| 0.8212        | 1.5434 | 120  | 0.7636          |
-| 0.6807        | 1.8006 | 140  | 0.7629          |
+| 0.9389        | 0.2572 | 20   | 0.8177          |
+| 0.7865        | 0.5145 | 40   | 0.7905          |
+| 0.7565        | 0.7717 | 60   | 0.7795          |
+| 0.8137        | 1.0289 | 80   | 0.7711          |
+| 0.7243        | 1.2862 | 100  | 0.7675          |
+| 0.8195        | 1.5434 | 120  | 0.7645          |
+| 0.6793        | 1.8006 | 140  | 0.7634          |
 
 
 ### Framework versions
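For context, this commit ships updated LoRA adapter weights for the base model named in the README. Below is a minimal, untested sketch of loading them with `transformers` and `peft`; the adapter repo id is an assumption for illustration, so substitute the actual Hub path of this repository.

```python
# Minimal sketch (untested): apply this commit's LoRA adapter to its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.2-1B-Instruct"  # base model named in the README
adapter_id = "besimray/miner_id_24"        # hypothetical adapter repo id -- replace with the real one

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # loads and applies the LoRA weights
```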
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b27b15e9ee1833ae31432908afae3349855d2b7e578bcca141463bb3dcf1c209
+oid sha256:da147b15838823004a0f5347d410eb528606bf55c3fa906cd13390d08d5201dd
 size 45169354
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ee822693fef528317ba083c5d48f88c30b9c61025611d616bab3d9d798072246
+oid sha256:57f7b5b1793e66b4e6395cd45d0b7bf1995df458d6793623a7bd9e2c0d0b928b
 size 45118424
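Both adapter files are stored as Git LFS pointers, so the `oid sha256:` line records the digest of the real payload rather than the file contents themselves. A minimal sketch of checking a downloaded copy against the pointer's digest follows; the local file name is an assumption.

```python
# Minimal sketch: verify a downloaded LFS payload against the sha256 oid
# recorded in its pointer file (the new adapter_model.safetensors above).
import hashlib

expected = "57f7b5b1793e66b4e6395cd45d0b7bf1995df458d6793623a7bd9e2c0d0b928b"

h = hashlib.sha256()
with open("adapter_model.safetensors", "rb") as f:  # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == expected, "digest mismatch: file does not match pointer"
print("adapter_model.safetensors matches its LFS pointer")
```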