eduard76 committed
Commit a85dc29 · verified · 1 Parent(s): 883bdff

Upload fine-tuned stability-qwen-7b-lora

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,207 @@
+ ---
+ base_model: Qwen/Qwen2.5-7B-Instruct
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:Qwen/Qwen2.5-7B-Instruct
+ - lora
+ - transformers
+ ---
+
+ # Model Card for stability-qwen-7b-lora
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
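+
+ A minimal loading sketch (assuming the adapter is published as `eduard76/stability-qwen-7b-lora`, and that `transformers`, `peft`, and `accelerate` are installed; adjust the repo id to wherever the adapter actually lives):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "Qwen/Qwen2.5-7B-Instruct"
+ adapter_id = "eduard76/stability-qwen-7b-lora"  # hypothetical repo id
+
+ # Load the tokenizer shipped with the adapter and the full base model,
+ # then attach the LoRA weights on top.
+ tokenizer = AutoTokenizer.from_pretrained(adapter_id)
+ base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
+ model = PeftModel.from_pretrained(base_model, adapter_id)
+
+ # Build a ChatML prompt with the chat template shipped in this repo.
+ messages = [{"role": "user", "content": "Hello!"}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(base_model.device)
+ outputs = model.generate(inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```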
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.17.1
adapter_config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen2.5-7B-Instruct",
+   "bias": "none",
+   "corda_config": null,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 128,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "qalora_group_size": 16,
+   "r": 64,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "k_proj",
+     "o_proj",
+     "q_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
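
The adapter configuration above maps directly onto a PEFT `LoraConfig`. A minimal sketch of the equivalent config in Python (assuming `peft` is installed; only the fields set in the JSON above are shown):

```python
from peft import LoraConfig

# Sketch of the LoRA setup recorded in adapter_config.json:
# rank 64, alpha 128 (scaling factor alpha/r = 2), dropout 0.05,
# applied to the attention projections of Qwen2.5-7B-Instruct.
lora_config = LoraConfig(
    base_model_name_or_path="Qwen/Qwen2.5-7B-Instruct",
    task_type="CAUSAL_LM",
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```
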
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06b934fa3fea4fddf948136f48cca87ea65c60c599d2c9e34ae2211375555c3c
+ size 161510984
added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "</tool_call>": 151658,
+   "<tool_call>": 151657,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
chat_template.jinja ADDED
@@ -0,0 +1,54 @@
+ {%- if tools %}
+ {{- '<|im_start|>system\n' }}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- messages[0]['content'] }}
+ {%- else %}
+ {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
+ {%- endif %}
+ {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+ {%- for tool in tools %}
+ {{- "\n" }}
+ {{- tool | tojson }}
+ {%- endfor %}
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
+ {%- else %}
+ {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- for message in messages %}
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "assistant" %}
+ {{- '<|im_start|>' + message.role }}
+ {%- if message.content %}
+ {{- '\n' + message.content }}
+ {%- endif %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if tool_call.function is defined %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '\n<tool_call>\n{"name": "' }}
+ {{- tool_call.name }}
+ {{- '", "arguments": ' }}
+ {{- tool_call.arguments | tojson }}
+ {{- '}\n</tool_call>' }}
+ {%- endfor %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
+ {{- '<|im_start|>user' }}
+ {%- endif %}
+ {{- '\n<tool_response>\n' }}
+ {{- message.content }}
+ {{- '\n</tool_response>' }}
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+ {{- '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
+ {%- endif %}
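
The template above is the standard Qwen2.5 ChatML-style format (system/user/assistant turns wrapped in `<|im_start|>`/`<|im_end|>`, plus tool-call handling). A short sketch of how it gets applied through the tokenizer, assuming the hypothetical repo id used earlier:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("eduard76/stability-qwen-7b-lora")  # hypothetical repo id

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Render the conversation with the template shipped in chat_template.jinja.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
# Expected shape (per the template):
# <|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n
```
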
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:694f1174c5bdf94e2fc50796c0f1733a5a3945ff110b0dfa40ea0701cc9c9c42
+ size 11422176
tokenizer_config.json ADDED
@@ -0,0 +1,208 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151644": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151645": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151646": {"content": "<|object_ref_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151647": {"content": "<|object_ref_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151648": {"content": "<|box_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151649": {"content": "<|box_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151650": {"content": "<|quad_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151651": {"content": "<|quad_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151652": {"content": "<|vision_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151653": {"content": "<|vision_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151654": {"content": "<|vision_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151655": {"content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151656": {"content": "<|video_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "151657": {"content": "<tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
+     "151658": {"content": "</tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
+     "151659": {"content": "<|fim_prefix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
+     "151660": {"content": "<|fim_middle|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
+     "151661": {"content": "<|fim_suffix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
+     "151662": {"content": "<|fim_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
+     "151663": {"content": "<|repo_name|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
+     "151664": {"content": "<|file_sep|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false}
+   },
+   "additional_special_tokens": ["<|im_start|>", "<|im_end|>", "<|object_ref_start|>", "<|object_ref_end|>", "<|box_start|>", "<|box_end|>", "<|quad_start|>", "<|quad_end|>", "<|vision_start|>", "<|vision_end|>", "<|vision_pad|>", "<|image_pad|>", "<|video_pad|>"],
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 131072,
+   "pad_token": "<|endoftext|>",
+   "padding_side": "right",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a30be5366889eb52b7ab44e9ce6cf9e2b29d39a14dedfd5ffe44c2d862a0e7d2
+ size 5905
training_metrics.json ADDED
@@ -0,0 +1,560 @@
+ [
+   {"loss": 1.3328, "grad_norm": 0.6021145582199097, "learning_rate": 7.82608695652174e-05, "epoch": 0.04, "step": 10},
+   {"loss": 0.705, "grad_norm": 0.6304188370704651, "learning_rate": 0.00016521739130434784, "epoch": 0.08, "step": 20},
+   {"loss": 0.2202, "grad_norm": 0.3915063440799713, "learning_rate": 0.00019996638918070336, "epoch": 0.12, "step": 30},
+   {"loss": 0.1394, "grad_norm": 0.13246139883995056, "learning_rate": 0.0001997610715447061, "epoch": 0.16, "step": 40},
+   {"loss": 0.1246, "grad_norm": 0.08913616836071014, "learning_rate": 0.0001993694918299864, "epoch": 0.2, "step": 50},
+   {"loss": 0.1246, "grad_norm": 0.15215232968330383, "learning_rate": 0.00019879238114789373, "epoch": 0.24, "step": 60},
+   {"loss": 0.1224, "grad_norm": 0.09965867549180984, "learning_rate": 0.0001980308170112659, "epoch": 0.28, "step": 70},
+   {"loss": 0.1206, "grad_norm": 0.044484540820121765, "learning_rate": 0.0001970862213226244, "epoch": 0.32, "step": 80},
+   {"loss": 0.1207, "grad_norm": 0.045077353715896606, "learning_rate": 0.00019596035771936592, "epoch": 0.36, "step": 90},
+   {"loss": 0.1204, "grad_norm": 0.05069291591644287, "learning_rate": 0.00019465532828090735, "epoch": 0.4, "step": 100},
+   {"loss": 0.118, "grad_norm": 0.07815514504909515, "learning_rate": 0.00019317356960393156, "epoch": 0.44, "step": 110},
+   {"loss": 0.1193, "grad_norm": 0.04100440815091133, "learning_rate": 0.00019151784825306205, "epoch": 0.48, "step": 120},
+   {"loss": 0.1188, "grad_norm": 0.04727301374077797, "learning_rate": 0.00018969125559546054, "epoch": 0.52, "step": 130},
+   {"loss": 0.1177, "grad_norm": 0.03562536835670471, "learning_rate": 0.00018769720202899194, "epoch": 0.56, "step": 140},
+   {"loss": 0.118, "grad_norm": 0.06143300607800484, "learning_rate": 0.00018553941061473218, "epoch": 0.6, "step": 150},
+   {"loss": 0.1189, "grad_norm": 0.032555170357227325, "learning_rate": 0.00018322191012570919, "epoch": 0.64, "step": 160},
+   {"loss": 0.1179, "grad_norm": 0.03246942535042763, "learning_rate": 0.0001807490275248539, "epoch": 0.68, "step": 170},
+   {"loss": 0.1169, "grad_norm": 0.028580008074641228, "learning_rate": 0.00017812537988620675, "epoch": 0.72, "step": 180},
+   {"loss": 0.1165, "grad_norm": 0.028096111491322517, "learning_rate": 0.00017535586577446276, "epoch": 0.76, "step": 190},
+   {"loss": 0.1184, "grad_norm": 0.024758173152804375, "learning_rate": 0.00017244565609895074, "epoch": 0.8, "step": 200},
+   {"loss": 0.1185, "grad_norm": 0.03368006646633148, "learning_rate": 0.00016940018445912272, "epoch": 0.84, "step": 210},
+   {"loss": 0.12, "grad_norm": 0.02795102261006832, "learning_rate": 0.00016622513699957948, "epoch": 0.88, "step": 220},
+   {"loss": 0.1171, "grad_norm": 0.0273590087890625, "learning_rate": 0.00016292644179357336, "epoch": 0.92, "step": 230},
+   {"loss": 0.1182, "grad_norm": 0.025100810453295708, "learning_rate": 0.00015951025777481096, "epoch": 0.96, "step": 240},
+   {"loss": 0.1172, "grad_norm": 0.022085770964622498, "learning_rate": 0.00015598296323822024, "epoch": 1.0, "step": 250},
+   {"eval_loss": 0.1171068549156189, "eval_runtime": 114.5086, "eval_samples_per_second": 4.366, "eval_steps_per_second": 1.092, "epoch": 1.0, "step": 250},
+   {"loss": 0.1178, "grad_norm": 0.021562082692980766, "learning_rate": 0.00015235114393115202, "epoch": 1.04, "step": 260},
+   {"loss": 0.1187, "grad_norm": 0.020716339349746704, "learning_rate": 0.0001486215807572515, "epoch": 1.08, "step": 270},
+   {"loss": 0.1197, "grad_norm": 0.02151365764439106, "learning_rate": 0.00014480123711595636, "epoch": 1.12, "step": 280},
+   {"loss": 0.1181, "grad_norm": 0.025096895173192024, "learning_rate": 0.0001408972459012606, "epoch": 1.16, "step": 290},
+   {"loss": 0.1198, "grad_norm": 0.028290871530771255, "learning_rate": 0.00013691689618401835, "epoch": 1.2, "step": 300},
+   {"loss": 0.117, "grad_norm": 0.02138497494161129, "learning_rate": 0.00013286761960265214, "epoch": 1.24, "step": 310},
+   {"loss": 0.118, "grad_norm": 0.022952184081077576, "learning_rate": 0.00012875697648767663, "epoch": 1.28, "step": 320},
+   {"loss": 0.1186, "grad_norm": 0.029631519690155983, "learning_rate": 0.00012459264174594304, "epoch": 1.32, "step": 330},
+   {"loss": 0.1161, "grad_norm": 0.022406576201319695, "learning_rate": 0.00012038239053096038, "epoch": 1.3599999999999999, "step": 340},
+   {"loss": 0.1195, "grad_norm": 0.020932380110025406, "learning_rate": 0.00011613408372604825, "epoch": 1.4, "step": 350},
+   {"loss": 0.1168, "grad_norm": 0.023278173059225082, "learning_rate": 0.00011185565326742473, "epoch": 1.44, "step": 360},
+   {"loss": 0.1182, "grad_norm": 0.01878235675394535, "learning_rate": 0.00010755508733463265, "epoch": 1.48, "step": 370},
+   {"loss": 0.1191, "grad_norm": 0.020206836983561516, "learning_rate": 0.00010324041543595535, "epoch": 1.52, "step": 380},
+   {"loss": 0.1177, "grad_norm": 0.02427123673260212, "learning_rate": 9.891969341666809e-05, "epoch": 1.56, "step": 390},
+   {"loss": 0.1162, "grad_norm": 0.02717430144548416, "learning_rate": 9.460098841811601e-05, "epoch": 1.6, "step": 400},
+   {"loss": 0.1163, "grad_norm": 0.021503793075680733, "learning_rate": 9.029236381570161e-05, "epoch": 1.6400000000000001, "step": 410},
+   {"loss": 0.1173, "grad_norm": 0.018184779211878777, "learning_rate": 8.600186416390342e-05, "epoch": 1.6800000000000002, "step": 420},
+   {"loss": 0.1154, "grad_norm": 0.019028829410672188, "learning_rate": 8.173750017643504e-05, "epoch": 1.72, "step": 430},
+   {"loss": 0.1182, "grad_norm": 0.022410472854971886, "learning_rate": 7.750723376958733e-05, "epoch": 1.76, "step": 440},
+   {"loss": 0.1165, "grad_norm": 0.03040890395641327, "learning_rate": 7.33189631966799e-05, "epoch": 1.8, "step": 450},
+   {"loss": 0.1186, "grad_norm": 0.023966152220964432, "learning_rate": 6.918050830137609e-05, "epoch": 1.8399999999999999, "step": 460},
+   {"loss": 0.1183, "grad_norm": 0.0278841033577919, "learning_rate": 6.509959591739522e-05, "epoch": 1.88, "step": 470},
+   {"loss": 0.1167, "grad_norm": 0.022951366379857063, "learning_rate": 6.10838454418825e-05, "epoch": 1.92, "step": 480},
+   {"loss": 0.1162, "grad_norm": 0.01736384816467762, "learning_rate": 5.714075460937125e-05, "epoch": 1.96, "step": 490},
+   {"loss": 0.1183, "grad_norm": 0.019332880154252052, "learning_rate": 5.327768549289934e-05, "epoch": 2.0, "step": 500},
+   {"eval_loss": 0.11685916036367416, "eval_runtime": 114.501, "eval_samples_per_second": 4.367, "eval_steps_per_second": 1.092, "epoch": 2.0, "step": 500},
+   {"loss": 0.1196, "grad_norm": 0.0202798955142498, "learning_rate": 4.9501850758417056e-05, "epoch": 2.04, "step": 510},
+   {"loss": 0.119, "grad_norm": 0.01924232393503189, "learning_rate": 4.582030019814948e-05, "epoch": 2.08, "step": 520},
+   {"loss": 0.1167, "grad_norm": 0.02831628918647766, "learning_rate": 4.223990756805841e-05, "epoch": 2.12, "step": 530},
+   {"loss": 0.1175, "grad_norm": 0.021440183743834496, "learning_rate": 3.8767357753977596e-05, "epoch": 2.16, "step": 540},
+   {"loss": 0.118, "grad_norm": 0.026763591915369034, "learning_rate": 3.540913429038407e-05, "epoch": 2.2, "step": 550},
+   {"loss": 0.1158, "grad_norm": 0.02004193514585495, "learning_rate": 3.217150725510946e-05, "epoch": 2.24, "step": 560},
+   {"loss": 0.1163, "grad_norm": 0.022136256098747253, "learning_rate": 2.9060521562591624e-05, "epoch": 2.2800000000000002, "step": 570},
+   {"loss": 0.116, "grad_norm": 0.022931700572371483, "learning_rate": 2.608198567752512e-05, "epoch": 2.32, "step": 580},
+   {"loss": 0.1161, "grad_norm": 0.01887076534330845, "learning_rate": 2.3241460769982814e-05, "epoch": 2.36, "step": 590},
+   {"loss": 0.1173, "grad_norm": 0.021023280918598175, "learning_rate": 2.0544250332256276e-05, "epoch": 2.4, "step": 600},
+   {"loss": 0.1185, "grad_norm": 0.021073181182146072, "learning_rate": 1.799539027680216e-05, "epoch": 2.44, "step": 610},
+   {"loss": 0.1176, "grad_norm": 0.021841416135430336, "learning_rate": 1.5599639533781853e-05, "epoch": 2.48, "step": 620},
+   {"loss": 0.1178, "grad_norm": 0.02489841915667057, "learning_rate": 1.3361471165749562e-05, "epoch": 2.52, "step": 630},
+   {"loss": 0.1162, "grad_norm": 0.025476103648543358, "learning_rate": 1.1285064016078784e-05, "epoch": 2.56, "step": 640},
+   {"loss": 0.1191, "grad_norm": 0.022609952837228775, "learning_rate": 9.374294906720082e-06, "epoch": 2.6, "step": 650},
+   {"loss": 0.1169, "grad_norm": 0.021457456052303314, "learning_rate": 7.63273139985733e-06, "epoch": 2.64, "step": 660},
+   {"loss": 0.1186, "grad_norm": 0.02362852729856968, "learning_rate": 6.063625136977447e-06, "epoch": 2.68, "step": 670},
+   {"loss": 0.1169, "grad_norm": 0.023836638778448105, "learning_rate": 4.669905767789884e-06, "epoch": 2.7199999999999998, "step": 680},
+   {"loss": 0.1186, "grad_norm": 0.01990230567753315, "learning_rate": 3.454175480330857e-06, "epoch": 2.76, "step": 690},
+   {"loss": 0.1166, "grad_norm": 0.020747952163219452, "learning_rate": 2.418704142465722e-06, "epoch": 2.8, "step": 700},
+   {"loss": 0.1145, "grad_norm": 0.023040590807795525, "learning_rate": 1.56542506385986e-06, "epoch": 2.84, "step": 710},
+   {"loss": 0.1167, "grad_norm": 0.01842990331351757, "learning_rate": 8.959313863315389e-07, "epoch": 2.88, "step": 720},
+   {"loss": 0.1167, "grad_norm": 0.02250193990767002, "learning_rate": 4.114731093257884e-07, "epoch": 2.92, "step": 730},
+   {"loss": 0.1172, "grad_norm": 0.021436356008052826, "learning_rate": 1.129547560632771e-07, "epoch": 2.96, "step": 740},
+   {"loss": 0.1156, "grad_norm": 0.02157304249703884, "learning_rate": 9.336847214269639e-10, "epoch": 3.0, "step": 750},
+   {"eval_loss": 0.11669337749481201, "eval_runtime": 114.507, "eval_samples_per_second": 4.367, "eval_steps_per_second": 1.092, "epoch": 3.0, "step": 750},
+   {"train_runtime": 9628.0255, "train_samples_per_second": 1.246, "train_steps_per_second": 0.078, "total_flos": 1.048558039990272e+18, "train_loss": 0.14366791375478108, "epoch": 3.0, "step": 750}
+ ]
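
training_metrics.json is a flat JSON array of `Trainer` log entries: one record every 10 optimizer steps with `loss`, `grad_norm`, and `learning_rate`, an `eval_loss` record at the end of each of the 3 epochs (steps 250, 500, 750), and a final summary record. A small sketch for pulling out the loss curves, assuming the file has been downloaded locally:

```python
import json

with open("training_metrics.json") as f:
    records = json.load(f)

# Per-logging-step training loss and per-epoch eval loss.
train_curve = [(r["step"], r["loss"]) for r in records if "loss" in r]
eval_curve = [(r["step"], r["eval_loss"]) for r in records if "eval_loss" in r]

print(train_curve[:3])  # [(10, 1.3328), (20, 0.705), (30, 0.2202)]
print(eval_curve)       # eval loss at steps 250, 500, 750
```
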
vocab.json ADDED
The diff for this file is too large to render. See raw diff