besimray committed (verified)
Commit b914da0 · 1 Parent(s): 1f42ad8

End of training

Files changed (2):
  1. README.md +15 -15
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -42,12 +42,12 @@ group_by_length: false
 hub_model_id: besimray/test
 hub_strategy: checkpoint
 hub_token: null
-learning_rate: 0.0002
+learning_rate: 5.0e-05
 load_in_4bit: false
 load_in_8bit: true
 local_rank: null
 logging_steps: 1
-lora_alpha: 32
+lora_alpha: 64
 lora_dropout: 0.05
 lora_fan_in_fan_out: null
 lora_model_dir: null
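
This hunk cuts the learning rate to a quarter of its old value (2e-4 to 5e-5) and doubles `lora_alpha` (32 to 64). Since LoRA scales its low-rank update by `lora_alpha / r`, raising alpha at a fixed rank roughly doubles the adapter's effective weight. A minimal sketch of how these values would map onto a PEFT `LoraConfig`, assuming `r=32` since `lora_r` is not visible in this hunk:

```python
# Minimal sketch of how the two changed hyperparameters enter a PEFT LoRA setup.
# NOTE: lora_r is not shown in this diff; r=32 below is an assumed value.
from peft import LoraConfig

config = LoraConfig(
    r=32,               # rank of the LoRA update matrices (assumed, not in the diff)
    lora_alpha=64,      # new value from this commit (was 32)
    lora_dropout=0.05,  # unchanged in this commit
    task_type="CAUSAL_LM",
)

# LoRA scales the low-rank update BA by lora_alpha / r, so raising alpha
# from 32 to 64 at fixed r doubles the effective weight of the adapter.
print(config.lora_alpha / config.r)  # 2.0 under the assumed r=32
```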
@@ -61,7 +61,7 @@ model_type: LlamaForCausalLM
 num_epochs: 4
 optimizer: adamw_bnb_8bit
 output_dir: miner_id_besimray
-pad_to_sequence_len: true
+pad_to_sequence_len: false
 resume_from_checkpoint: null
 s2_attention: null
 sample_packing: false
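
This hunk turns off `pad_to_sequence_len`, which in axolotl pads every batch out to the configured `sequence_len` when true; with it false, batches are padded only to their longest sample. A rough sketch of the two behaviours using the plain tokenizer API (axolotl wires this up internally; `max_length=2048` is an assumed value, since `sequence_len` does not appear in this diff):

```python
# Rough illustration of the two padding behaviours this flag toggles.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
if tok.pad_token is None:       # Llama tokenizers often ship without a pad token
    tok.pad_token = tok.eos_token

texts = ["short example", "a somewhat longer training example"]

# pad_to_sequence_len: true  -> every batch padded to the full sequence length
fixed = tok(texts, padding="max_length", max_length=2048, truncation=True)

# pad_to_sequence_len: false -> pad only to the longest sample in the batch
dynamic = tok(texts, padding="longest", truncation=True)

print(len(fixed["input_ids"][0]), len(dynamic["input_ids"][0]))  # 2048 vs. batch max
```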
@@ -90,7 +90,7 @@ xformers_attention: null
 
 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2052
+- Loss: 1.2202
 
 ## Model description
 
@@ -109,7 +109,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0002
+- learning_rate: 5e-05
 - train_batch_size: 7
 - eval_batch_size: 7
 - seed: 42
@@ -124,16 +124,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.3357 | 0.0147 | 1 | 1.2696 |
-| 1.1846 | 0.0294 | 2 | 1.2671 |
-| 1.5764 | 0.0441 | 3 | 1.2609 |
-| 1.3116 | 0.0588 | 4 | 1.2478 |
-| 1.3583 | 0.0735 | 5 | 1.2308 |
-| 1.3894 | 0.0882 | 6 | 1.2229 |
-| 1.243 | 0.1029 | 7 | 1.2255 |
-| 1.4176 | 0.1176 | 8 | 1.2249 |
-| 1.3973 | 0.1324 | 9 | 1.2156 |
-| 1.3676 | 0.1471 | 10 | 1.2052 |
+| 1.3327 | 0.0147 | 1 | 1.2694 |
+| 1.1887 | 0.0294 | 2 | 1.2705 |
+| 1.5717 | 0.0441 | 3 | 1.2656 |
+| 1.3113 | 0.0588 | 4 | 1.2619 |
+| 1.3671 | 0.0735 | 5 | 1.2536 |
+| 1.4151 | 0.0882 | 6 | 1.2436 |
+| 1.2607 | 0.1029 | 7 | 1.2301 |
+| 1.4189 | 0.1176 | 8 | 1.2256 |
+| 1.3843 | 0.1324 | 9 | 1.2237 |
+| 1.3753 | 0.1471 | 10 | 1.2202 |
 
 
 ### Framework versions
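
With the new settings, eval loss after 10 steps moves from 1.2052 to 1.2202, per the table above. A hedged sketch of loading the published adapter for inference, assuming the repo stays at `besimray/test` (the `hub_model_id` above) and that `bitsandbytes` is installed, matching `load_in_8bit: true`:

```python
# Sketch of loading the base model plus this adapter for inference,
# mirroring the 8-bit setting from the config; untested against this exact repo.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Llama-3.2-1B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "besimray/test")  # hub_model_id above
tok = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")

inputs = tok("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```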
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:04395dced9684a18e5d52dd2a6c1bec536ac3497b627a50216c11ca4ded14e18
+oid sha256:c2295b41ac661bb1f048c5ea31fe90887943731c0624ee1b94ce0b0510bf55c3
 size 67713738
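
`adapter_model.bin` is stored with Git LFS, so the repository tracks only this three-line pointer (spec version, sha256 oid, byte size); the commit swaps the oid while the size stays at 67713738 bytes. A small sketch for verifying a downloaded copy against the pointer (the local path is hypothetical):

```python
# Sketch: verify a downloaded adapter_model.bin against the LFS pointer's oid.
# The path below is hypothetical; point it at your local copy of the file.
import hashlib
from pathlib import Path

EXPECTED_OID = "c2295b41ac661bb1f048c5ea31fe90887943731c0624ee1b94ce0b0510bf55c3"
EXPECTED_SIZE = 67713738

path = Path("adapter_model.bin")
digest = hashlib.sha256(path.read_bytes()).hexdigest()

assert path.stat().st_size == EXPECTED_SIZE, "size mismatch with pointer"
assert digest == EXPECTED_OID, "sha256 mismatch with pointer"
print("file matches the LFS pointer")
```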