End of training
Files changed:
- README.md (+19 -20)
- adapter_model.bin (+1 -1)
README.md CHANGED

@@ -56,10 +56,10 @@ lora_r: 16
 lora_target_linear: true
 lr_scheduler: cosine
 max_steps: 150
-micro_batch_size:
+micro_batch_size: 10
 mlflow_experiment_name: mhenrichsen/alpaca_2k_test
 model_type: LlamaForCausalLM
-num_epochs:
+num_epochs: 5
 optimizer: adamw_bnb_8bit
 output_dir: miner_id_besimray
 pad_to_sequence_len: false
@@ -91,7 +91,7 @@ xformers_attention: null
 
 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
+- Loss: 1.1848
 
 ## Model description
 
@@ -111,31 +111,30 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size:
-- eval_batch_size:
+- train_batch_size: 10
+- eval_batch_size: 10
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size:
+- total_train_batch_size: 40
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 10
-- training_steps:
+- training_steps: 10
 
 ### Training results
 
-| Training Loss | Epoch
-|:-------------:|:-----
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.1377 | 2.8095 | 30 | 1.1380 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 1.2878 | 0.5 | 1 | 1.2576 |
+| 1.294 | 1.0 | 2 | 1.2571 |
+| 1.2719 | 1.375 | 3 | 1.2468 |
+| 1.2869 | 1.875 | 4 | 1.2302 |
+| 1.2828 | 2.25 | 5 | 1.2147 |
+| 1.2449 | 2.75 | 6 | 1.2145 |
+| 1.2385 | 3.125 | 7 | 1.2129 |
+| 1.2142 | 3.625 | 8 | 1.2061 |
+| 1.2725 | 4.125 | 9 | 1.1927 |
+| 1.2282 | 4.5 | 10 | 1.1848 |
 
 
 ### Framework versions
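The `total_train_batch_size: 40` added in the hyperparameter list is derived from the other values rather than set directly. A minimal sketch of that arithmetic, assuming a single training device (the card does not state the device count):

```python
# Sketch of how total_train_batch_size in the card follows from the other
# hyperparameters. num_devices = 1 is an assumption; the card does not say.
micro_batch_size = 10            # micro_batch_size / train_batch_size in the diff
gradient_accumulation_steps = 4  # from the hyperparameter list
num_devices = 1                  # assumed single-GPU run

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # 40, matching the value added in the README
```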
adapter_model.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:663e8219297d65bc184eb8f4abcf0507f9efed03f9d35d5b9fc1a1f5ffdd8295
 size 45169354
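The updated `adapter_model.bin` is a git-LFS pointer to the LoRA weights trained on top of the base model named in the card. A minimal loading sketch, assuming the repository is a standard PEFT-format adapter (`adapter_model.bin` plus `adapter_config.json`); `ADAPTER_REPO` is a placeholder, not the actual repo id:

```python
# Minimal usage sketch, assuming this repo ships a standard PEFT LoRA adapter
# (adapter_model.bin + adapter_config.json). ADAPTER_REPO is a placeholder for
# the real Hub repo id of this adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "unsloth/Llama-3.2-1B-Instruct"   # base model named in the card
ADAPTER_REPO = "<this-adapter-repo-id>"        # placeholder: replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(model, ADAPTER_REPO)  # applies the LoRA adapter weights

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```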