Peaky8linders committed
Commit 22de27a · verified · 1 Parent(s): a9cb3ce

End of training

Files changed (2)
  1. README.md +157 -3
  2. adapter_model.bin +3 -0
README.md CHANGED
@@ -1,3 +1,157 @@
- ---
- license: apache-2.0
- ---

---
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
library_name: peft
license: apache-2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: isafpr-tiny-llama-lora
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: true
load_in_4bit: false
strict: false

data_seed: 2606
seed: 2606

datasets:
  - path: data/templatefree_isaf_press_releases_ft_train.jsonl
    type: input_output
dataset_prepared_path:
val_set_size: 0.1
output_dir: tiny-llama/lora-out
hub_model_id: Peaky8linders/isafpr-tiny-llama-lora

sequence_len: 4096
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:

```

</details><br>
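
For readers more familiar with `peft` than with axolotl, the adapter settings above correspond roughly to the `LoraConfig` sketched below. This is illustrative only; in particular, the `target_modules` list is an assumption about how `lora_target_linear: true` expands for a Llama-architecture model, not something stated in the config.

```python
# Approximate peft equivalent of the axolotl LoRA settings above (illustrative sketch).
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,             # lora_r
    lora_alpha=16,    # lora_alpha
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # Assumption: lora_target_linear: true targets every linear projection in the Llama blocks.
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```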

# isafpr-tiny-llama-lora

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the dataset at `data/templatefree_isaf_press_releases_ft_train.jsonl` (listed in the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.0395

## Model description

More information needed

## Intended uses & limitations

More information needed
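
A minimal inference sketch follows, assuming the adapter published in this repo is applied on top of the listed base model; the prompt string is only a placeholder, since the expected input format is not documented in this card.

```python
# Minimal sketch: load the base TinyLlama model and apply this LoRA adapter with peft.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_id = "Peaky8linders/isafpr-tiny-llama-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

prompt = "ISAF press release text goes here"  # placeholder; the training prompt template is not shown in this card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```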

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 2606
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
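
The reported `total_train_batch_size` follows directly from the values above; a one-line check, assuming a single GPU (no `fsdp` or `deepspeed` is configured):

```python
# How total_train_batch_size = 8 is derived from the listed hyperparameters.
micro_batch_size = 2              # train_batch_size per device
gradient_accumulation_steps = 4
num_devices = 1                   # assumption: single-GPU run

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 8
```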

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7938 | 0.0138 | 1 | 1.7961 |
| 0.2755 | 0.2483 | 18 | 0.2099 |
| 0.0937 | 0.4966 | 36 | 0.0798 |
| 0.0625 | 0.7448 | 54 | 0.0646 |
| 0.0507 | 0.9931 | 72 | 0.0581 |
| 0.0466 | 1.2138 | 90 | 0.0516 |
| 0.0391 | 1.4621 | 108 | 0.0485 |
| 0.0534 | 1.7103 | 126 | 0.0457 |
| 0.0611 | 1.9586 | 144 | 0.0439 |
| 0.0281 | 2.1793 | 162 | 0.0434 |
| 0.0382 | 2.4276 | 180 | 0.0416 |
| 0.031 | 2.6759 | 198 | 0.0407 |
| 0.0278 | 2.9241 | 216 | 0.0400 |
| 0.0377 | 3.1448 | 234 | 0.0397 |
| 0.0247 | 3.3931 | 252 | 0.0400 |
| 0.0419 | 3.6414 | 270 | 0.0395 |
| 0.0273 | 3.8897 | 288 | 0.0395 |


### Framework versions

- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:24dbd44e3cbb1b2a7d811837614370f63d61f553cf63cedd66646e00c9600ee8
size 101036698
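
The adapter weights themselves are stored via Git LFS; only the pointer above lives in the commit. As a sketch (assuming `huggingface_hub` is installed), the downloaded file can be checked against the recorded sha256:

```python
# Verify the downloaded adapter weights against the sha256 recorded in the LFS pointer above.
import hashlib
from huggingface_hub import hf_hub_download

path = hf_hub_download("Peaky8linders/isafpr-tiny-llama-lora", "adapter_model.bin")
digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
print(digest)  # expected: 24dbd44e3cbb1b2a7d811837614370f63d61f553cf63cedd66646e00c9600ee8
```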