AiAF committed (verified)
Commit 701c03e · 1 Parent(s): 01783d7

End of training

Files changed (2)
  1. README.md +167 -0
  2. generation_config.json +7 -0
README.md ADDED
@@ -0,0 +1,167 @@
---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- axolotl
- generated_from_trainer
datasets:
- json
model-index:
- name: UFOs-Finetune-V1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
# optionally might have model_type or tokenizer_type
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: AiAF/UFOs-Finetune-V1

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: json
    data_files: plain_qa_list.jsonl
    ds_type: json
    type: chat_template
    chat_template: chatml
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    roles:
      user:
        - human
      assistant:
        - gpt
      system:
        - system

dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/UFOs-Finetune-V1/out

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

max_steps: 100000

wandb_project: "UFO_LLM_Finetune"
wandb_entity:
wandb_watch: "all"
wandb_name: "UFO_LLM_Finetune-V1"
wandb_log_model: "false"

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 10
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint: /workspace/axolotl/outputs/out/checkpoint-18
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:

```

</details><br>

# UFOs-Finetune-V1

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on a local JSON dataset (`plain_qa_list.jsonl`; see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.3935
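
(For intuition, a token-level cross-entropy loss of 1.3935 corresponds to a perplexity of exp(1.3935) ≈ 4.03.)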

## Model description

More information needed

## Intended uses & limitations

More information needed
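
Pending a fuller card, here is a minimal inference sketch. It assumes prompts should follow the ChatML format set by `chat_template: chatml` in the config above, and that the tokenizer and weights load directly from the Hub repo:

```python
# Minimal inference sketch (assumptions: ChatML prompt format per the
# training config; tokenizer/model load directly from the Hub repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AiAF/UFOs-Finetune-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a ChatML-style prompt by hand, mirroring the training template.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat do you know about UFO sightings?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Note that `special_tokens:` is empty in the config, so `<|im_start|>`/`<|im_end|>` may tokenize as plain text rather than as special tokens; verify against the repo's tokenizer before relying on this format.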

## Training and evaluation data

Per the axolotl config above, training used `plain_qa_list.jsonl`, a JSONL file of ChatML-formatted conversations, with 5% held out as the evaluation set (`val_set_size: 0.05`).
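
From the dataset block in the config (`field_messages: conversations`, `message_field_role: from`, `message_field_content: value`, with `human`/`gpt`/`system` roles), each line of `plain_qa_list.jsonl` should look roughly like this (illustrative values, not taken from the actual dataset):

```json
{"conversations": [
  {"from": "system", "value": "You are a helpful assistant."},
  {"from": "human", "value": "What happened during the 1947 Roswell incident?"},
  {"from": "gpt", "value": "In July 1947, ..."}
]}
```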

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative schedule sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
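
The learning-rate schedule these settings imply can be reproduced outside axolotl for inspection; a sketch, with plain `AdamW` standing in for `adamw_bnb_8bit` and the reported 50-step horizon assumed:

```python
# Sketch: linear warmup for 10 steps, then cosine decay over the
# remaining 40 steps, peaking at lr=5e-6.
import torch
from transformers import get_cosine_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW([param], lr=5e-6, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=50
)
for step in range(1, 51):
    optimizer.step()
    scheduler.step()
    if step in (1, 10, 25, 50):
        print(f"step {step:>2}: lr = {scheduler.get_last_lr()[0]:.2e}")
```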

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7686 | 0.1111 | 1 | 1.6895 |
| 2.0582 | 0.3333 | 3 | 1.6884 |
| 1.9135 | 0.6667 | 6 | 1.6793 |
| 1.8261 | 1.0 | 9 | 1.6667 |
| 1.8757 | 1.3333 | 12 | 1.6570 |
| 1.8754 | 1.6667 | 15 | 1.6501 |
| 1.8426 | 2.0 | 18 | 1.6468 |
| 2.8515 | 4.1739 | 21 | 1.4353 |
| 1.3702 | 4.6957 | 24 | 1.4068 |
| 1.2889 | 5.1739 | 27 | 1.3909 |
| 1.2635 | 5.6957 | 30 | 1.3870 |
| 1.2139 | 6.1739 | 33 | 1.3874 |
| 1.1786 | 6.6957 | 36 | 1.3895 |
| 1.1458 | 7.1739 | 39 | 1.3921 |
| 1.1389 | 7.6957 | 42 | 1.3929 |
| 1.1255 | 8.1739 | 45 | 1.3934 |
| 1.1589 | 8.6957 | 48 | 1.3935 |
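
Note the discontinuity between step 18 (epoch 2.0) and step 21 (epoch 4.17): this likely reflects `resume_from_checkpoint` pointing at an earlier run's `checkpoint-18`, so the epoch counter after resumption follows that run's dataloader length rather than continuing smoothly.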

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0

generation_config.json ADDED
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "transformers_version": "4.48.3"
}
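
With `do_sample` set to `true`, `generate()` samples by default for this checkpoint; the file is read automatically when the model loads. A sketch of inspecting it directly:

```python
# Sketch: the repo's generation_config.json becomes the default for
# generate(); per-call keyword arguments still override it.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("AiAF/UFOs-Finetune-V1")
print(gen_config.do_sample, gen_config.bos_token_id, gen_config.eos_token_id)
# -> True 1 2, per the file above
```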