besimray committed
Commit c624073 · verified · 1 Parent(s): 9fa9db3

End of training

Files changed (2)
  1. README.md +13 -13
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -58,7 +58,7 @@ max_steps: 10
 micro_batch_size: 7
 mlflow_experiment_name: mhenrichsen/alpaca_2k_test
 model_type: LlamaForCausalLM
-num_epochs: 4
+num_epochs: 20
 optimizer: adamw_bnb_8bit
 output_dir: miner_id_besimray
 pad_to_sequence_len: false
@@ -78,7 +78,7 @@ wandb_mode: online
 wandb_project: Public_TuningSN
 wandb_run: miner_id_24
 wandb_runid: 383a850e-bb15-45a2-8f4b-fc96eb001a74
-warmup_steps: 10
+warmup_steps: 100
 weight_decay: 0.0
 xformers_attention: null
 
@@ -90,7 +90,7 @@ xformers_attention: null
 
 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2202
+- Loss: 1.2638
 
 ## Model description
 
@@ -117,7 +117,7 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 28
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 10
+- lr_scheduler_warmup_steps: 100
 - training_steps: 10
 
 ### Training results
@@ -125,15 +125,15 @@ The following hyperparameters were used during training:
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | 1.3327 | 0.0147 | 1 | 1.2694 |
-| 1.1887 | 0.0294 | 2 | 1.2705 |
-| 1.5717 | 0.0441 | 3 | 1.2656 |
-| 1.3113 | 0.0588 | 4 | 1.2619 |
-| 1.3671 | 0.0735 | 5 | 1.2536 |
-| 1.4151 | 0.0882 | 6 | 1.2436 |
-| 1.2607 | 0.1029 | 7 | 1.2301 |
-| 1.4189 | 0.1176 | 8 | 1.2256 |
-| 1.3843 | 0.1324 | 9 | 1.2237 |
-| 1.3753 | 0.1471 | 10 | 1.2202 |
+| 1.1887 | 0.0294 | 2 | 1.2676 |
+| 1.5761 | 0.0441 | 3 | 1.2679 |
+| 1.3197 | 0.0588 | 4 | 1.2693 |
+| 1.3721 | 0.0735 | 5 | 1.2674 |
+| 1.4327 | 0.0882 | 6 | 1.2674 |
+| 1.2795 | 0.1029 | 7 | 1.2692 |
+| 1.4695 | 0.1176 | 8 | 1.2674 |
+| 1.4243 | 0.1324 | 9 | 1.2657 |
+| 1.4099 | 0.1471 | 10 | 1.2638 |
 
 
 ### Framework versions
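
Two things about this change are worth flagging, though neither is stated in the card itself. First, raising num_epochs from 4 to 20 cannot matter here: max_steps: 10 (visible in the first hunk's context) already caps the run, and training_steps: 10 confirms both runs took exactly ten optimizer steps. Second, raising warmup_steps from 10 to 100 means the new run ends while still inside warmup, so the learning rate never approaches its configured peak. The sketch below is a minimal re-derivation of the linear-warmup-plus-cosine multiplier used by schedulers such as transformers' get_cosine_schedule_with_warmup (assuming that is what lr_scheduler_type: cosine resolves to here):

```python
import math

def lr_multiplier(step: int, warmup_steps: int, total_steps: int) -> float:
    """Linear warmup, then cosine decay to zero. Mirrors the usual
    get_cosine_schedule_with_warmup formula; an assumption about the
    scheduler this config resolves to, not something the card states."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max(0.0, 0.5 * (1.0 + math.cos(math.pi * progress)))

for warmup in (10, 100):
    mults = [round(lr_multiplier(s, warmup, 10), 2) for s in range(10)]
    print(f"warmup_steps={warmup}: {mults}")
# warmup_steps=10:  multiplier climbs to 0.9 of the peak LR by the final step
# warmup_steps=100: multiplier never exceeds 0.09 of the peak LR
```

That would explain the two results tables: with warmup_steps: 10 the validation loss falls steadily from 1.2694 to 1.2202, while with warmup_steps: 100 it barely moves, ending at 1.2638.
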
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c2295b41ac661bb1f048c5ea31fe90887943731c0624ee1b94ce0b0510bf55c3
+oid sha256:339f69e146da45ed44b91d2e55f3389d74d840b0727be524af0a85b4f71ec13e
 size 67713738
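
These three-line entries are Git LFS pointer files, not the weights themselves: the retrained adapter produces a new sha256 oid, while the blob stays the same 67713738 bytes, as expected for a LoRA adapter whose shapes did not change. A small sketch of how one could verify a downloaded blob against the pointer's oid (the local path is illustrative):

```python
import hashlib

def lfs_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256, the hash Git LFS records as the oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# oid from the new pointer above; "adapter_model.bin" is a hypothetical local copy
expected = "339f69e146da45ed44b91d2e55f3389d74d840b0727be524af0a85b4f71ec13e"
print(lfs_sha256("adapter_model.bin") == expected)
```
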