End of training

Changed files:
- README.md (+27, -25)
- adapter_model.bin (+2, -2)

README.md (CHANGED)
@@ -30,6 +30,7 @@ debug: null
 deepspeed: null
 early_stopping_patience: null
 eval_max_new_tokens: 128
+eval_sample_packing: false
 eval_table_size: null
 evals_per_epoch: 4
 flash_attention: true
@@ -42,29 +43,29 @@ group_by_length: false
 hub_model_id: besimray/test
 hub_strategy: checkpoint
 hub_token: null
-learning_rate:
+learning_rate: 0.0002
 load_in_4bit: false
 load_in_8bit: true
 local_rank: null
 logging_steps: 1
-lora_alpha:
+lora_alpha: 32
 lora_dropout: 0.05
 lora_fan_in_fan_out: null
 lora_model_dir: null
-lora_r:
+lora_r: 16
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps:
-micro_batch_size:
+max_steps: 150
+micro_batch_size: 2
 mlflow_experiment_name: mhenrichsen/alpaca_2k_test
 model_type: LlamaForCausalLM
-num_epochs:
+num_epochs: 3
 optimizer: adamw_bnb_8bit
 output_dir: miner_id_besimray
 pad_to_sequence_len: false
 resume_from_checkpoint: null
 s2_attention: null
-sample_packing:
+sample_packing: true
 save_steps: 5
 save_strategy: steps
 sequence_len: 4096
@@ -78,7 +79,7 @@ wandb_mode: online
 wandb_project: Public_TuningSN
 wandb_run: miner_id_24
 wandb_runid: 383a850e-bb15-45a2-8f4b-fc96eb001a74
-warmup_steps:
+warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null

@@ -90,7 +91,7 @@ xformers_attention: null

 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
+- Loss: 1.1380

 ## Model description

@@ -109,31 +110,32 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate:
-- train_batch_size:
-- eval_batch_size:
+- learning_rate: 0.0002
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size:
+- total_train_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps:
-- training_steps:
+- lr_scheduler_warmup_steps: 10
+- training_steps: 30

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
+| 1.3356 | 0.0952 | 1 | 1.2672 |
+| 1.2159 | 0.2857 | 3 | 1.2534 |
+| 1.2716 | 0.5714 | 6 | 1.2098 |
+| 1.2697 | 0.8571 | 9 | 1.1910 |
+| 1.2243 | 1.1190 | 12 | 1.1755 |
+| 1.1981 | 1.4048 | 15 | 1.1604 |
+| 1.1662 | 1.6905 | 18 | 1.1496 |
+| 1.1879 | 1.9762 | 21 | 1.1424 |
+| 1.198 | 2.2381 | 24 | 1.1413 |
+| 1.1981 | 2.5238 | 27 | 1.1411 |
+| 1.1377 | 2.8095 | 30 | 1.1380 |


 ### Framework versions
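The updated config fills in the previously blank LoRA and batching fields (lora_r: 16, lora_alpha: 32, lora_dropout: 0.05, lora_target_linear: true, micro_batch_size: 2), and with gradient_accumulation_steps: 4 this yields the total_train_batch_size of 8 listed in the hyperparameters above. As a rough, non-authoritative sketch, the snippet below shows how broadly equivalent adapter settings could be expressed with the peft library; the LoraConfig call and the target_modules list are illustrative assumptions, not taken from this repository.

```python
# Hypothetical sketch: the LoRA settings from the updated config
# (lora_r: 16, lora_alpha: 32, lora_dropout: 0.05, lora_target_linear: true)
# expressed with the peft library. The target_modules list is an assumption
# standing in for "all linear projections"; it is not read from this repo.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,             # lora_r
    lora_alpha=32,    # lora_alpha
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[  # rough equivalent of lora_target_linear: true
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

# Effective batch size as reported in the card:
micro_batch_size = 2
gradient_accumulation_steps = 4
total_train_batch_size = micro_batch_size * gradient_accumulation_steps  # 8
```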
adapter_model.bin (CHANGED)

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:267858af8019eb7e782699cae15101562b55ccdf93cf3dc7d1d602d0c507ea90
+size 45169354
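The adapter_model.bin change swaps the git-LFS pointer for a new adapter checkpoint of roughly 45 MB. Below is a minimal, hypothetical usage sketch for loading that adapter on top of the base model named in the card, assuming the adapter was pushed under the hub_model_id from the config (besimray/test) and that transformers and peft are installed; it is not an official recipe from this repo.

```python
# Hypothetical usage sketch: apply the LoRA adapter from this commit to the
# base model named in the card. The adapter repo id is assumed to match the
# hub_model_id in the config (besimray/test); adjust if it lives elsewhere.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.2-1B-Instruct"  # base model from the card
adapter_id = "besimray/test"               # hub_model_id from the config

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads adapter_model.bin

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```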