finalform committed
Commit cfb75ed · verified · 1 Parent(s): 2109154

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,209 @@
+ ---
+ base_model: Qwen/Qwen2.5-7B-Instruct
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:Qwen/Qwen2.5-7B-Instruct
+ - lora
+ - sft
+ - transformers
+ - trl
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
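+
+ A minimal sketch, assuming the adapter in this repo is loaded on top of `Qwen/Qwen2.5-7B-Instruct` with `peft` and `transformers`; `path/to/this-adapter` is a placeholder for this repo's id or a local checkout:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ # Load the frozen base model, then attach the LoRA adapter weights.
+ base = AutoModelForCausalLM.from_pretrained(
+     "Qwen/Qwen2.5-7B-Instruct", torch_dtype="auto", device_map="auto"
+ )
+ model = PeftModel.from_pretrained(base, "path/to/this-adapter")  # placeholder path
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
+
+ # Build a chat prompt with the bundled template and generate a reply.
+ messages = [{"role": "user", "content": "Hello!"}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ output = model.generate(inputs, max_new_tokens=128)
+ print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```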
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.17.0
adapter_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen2.5-7B-Instruct",
+   "bias": "none",
+   "corda_config": null,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_bias": false,
+   "lora_dropout": 0.1,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "qalora_group_size": 16,
+   "r": 64,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "k_proj",
+     "down_proj",
+     "o_proj",
+     "q_proj",
+     "v_proj",
+     "up_proj",
+     "gate_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
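
A sketch of the `peft` `LoraConfig` that would correspond to the key fields above (rank 64, alpha 16, dropout 0.1, LoRA on all attention and MLP projections); this reconstruction is an assumption for readability, not the repo's training script:

```python
from peft import LoraConfig

# Sketch: a LoraConfig mirroring the key fields of adapter_config.json above.
lora_config = LoraConfig(
    r=64,                   # "r": 64
    lora_alpha=16,          # "lora_alpha": 16
    lora_dropout=0.1,       # "lora_dropout": 0.1
    bias="none",            # "bias": "none"
    task_type="CAUSAL_LM",  # "task_type": "CAUSAL_LM"
    target_modules=[        # attention + MLP projection layers
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```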
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a0c1f01af04ec0d51c229f63e9191c2baab2c2e7a1ad6795ccffe749aae29ff
+ size 645975704
added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "</tool_call>": 151658,
+   "<tool_call>": 151657,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
chat_template.jinja ADDED
@@ -0,0 +1,54 @@
+ {%- if tools %}
+     {{- '<|im_start|>system\n' }}
+     {%- if messages[0]['role'] == 'system' %}
+         {{- messages[0]['content'] }}
+     {%- else %}
+         {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
+     {%- endif %}
+     {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+     {%- for tool in tools %}
+         {{- "\n" }}
+         {{- tool | tojson }}
+     {%- endfor %}
+     {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+     {%- if messages[0]['role'] == 'system' %}
+         {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
+     {%- else %}
+         {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
+     {%- endif %}
+ {%- endif %}
+ {%- for message in messages %}
+     {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
+         {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+     {%- elif message.role == "assistant" %}
+         {{- '<|im_start|>' + message.role }}
+         {%- if message.content %}
+             {{- '\n' + message.content }}
+         {%- endif %}
+         {%- for tool_call in message.tool_calls %}
+             {%- if tool_call.function is defined %}
+                 {%- set tool_call = tool_call.function %}
+             {%- endif %}
+             {{- '\n<tool_call>\n{"name": "' }}
+             {{- tool_call.name }}
+             {{- '", "arguments": ' }}
+             {{- tool_call.arguments | tojson }}
+             {{- '}\n</tool_call>' }}
+         {%- endfor %}
+         {{- '<|im_end|>\n' }}
+     {%- elif message.role == "tool" %}
+         {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
+             {{- '<|im_start|>user' }}
+         {%- endif %}
+         {{- '\n<tool_response>\n' }}
+         {{- message.content }}
+         {{- '\n</tool_response>' }}
+         {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+             {{- '<|im_end|>\n' }}
+         {%- endif %}
+     {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+     {{- '<|im_start|>assistant\n' }}
+ {%- endif %}
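
To see this template rendered end to end, a sketch assuming a `transformers` version whose `apply_chat_template` accepts a `tools` argument; the `get_weather` schema is hypothetical, included only to exercise the `<tools>` branch above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Hypothetical tool schema, purely for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    add_generation_prompt=True,
    tokenize=False,
)
print(text)  # system block embeds the function signatures inside <tools>...</tools>
```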
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85a21d03c2ea98d53c52fd96d7e7982e7d6b7827185339dabc6a151f19b15814
+ size 1292087499
rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:994c2cba2555eef301d8087ae1484ed0e7252f44df4637c6e9af3389b996ceee
+ size 14645
scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b1736ec6627ebf927133b64702a5b6824ab5d43b5017e4277694c355a4f042e
+ size 1465
special_tokens_map.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|im_end|>"
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3f9d93e80cff961819dcba7d892cf9656e086a0cf83cdbef23f10c1a493faa2
+ size 11422061
tokenizer_config.json ADDED
@@ -0,0 +1,207 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|object_ref_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|object_ref_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "<|box_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "<|box_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 131072,
+   "pad_token": "<|im_end|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
trainer_state.json ADDED
@@ -0,0 +1,505 @@
+ {
+   "best_global_step": null,
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 3.0,
+   "eval_steps": 500,
+   "global_step": 1245,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.060350030175015085,
+       "grad_norm": 0.292353093624115,
+       "learning_rate": 0.00013971818181818181,
+       "loss": 1.8075,
+       "mean_token_accuracy": 0.6316984993219376,
+       "num_tokens": 155458.0,
+       "step": 25
+     },
+     {
+       "epoch": 0.12070006035003017,
+       "grad_norm": 0.23519554734230042,
+       "learning_rate": 0.00028525795454545453,
+       "loss": 0.8594,
+       "mean_token_accuracy": 0.7813338875770569,
+       "num_tokens": 280957.0,
+       "step": 50
+     },
+     {
+       "epoch": 0.18105009052504525,
+       "grad_norm": 0.17745567858219147,
+       "learning_rate": 0.0004307977272727273,
+       "loss": 0.6189,
+       "mean_token_accuracy": 0.8301489073038101,
+       "num_tokens": 439583.0,
+       "step": 75
+     },
+     {
+       "epoch": 0.24140012070006034,
+       "grad_norm": 0.28904563188552856,
+       "learning_rate": 0.0005122807260672283,
+       "loss": 0.514,
+       "mean_token_accuracy": 0.8567226785421371,
+       "num_tokens": 566372.0,
+       "step": 100
+     },
+     {
+       "epoch": 0.30175015087507545,
+       "grad_norm": 0.18326736986637115,
+       "learning_rate": 0.0005120935869832972,
+       "loss": 0.3703,
+       "mean_token_accuracy": 0.8942542725801468,
+       "num_tokens": 721871.0,
+       "step": 125
+     },
+     {
+       "epoch": 0.3621001810500905,
+       "grad_norm": 0.22670747339725494,
+       "learning_rate": 0.0005117075078651932,
+       "loss": 0.3122,
+       "mean_token_accuracy": 0.9121941888332367,
+       "num_tokens": 848123.0,
+       "step": 150
+     },
+     {
+       "epoch": 0.4224502112251056,
+       "grad_norm": 0.1902594268321991,
+       "learning_rate": 0.0005111227888047993,
+       "loss": 0.2411,
+       "mean_token_accuracy": 0.9314639317989349,
+       "num_tokens": 1005664.0,
+       "step": 175
+     },
+     {
+       "epoch": 0.4828002414001207,
+       "grad_norm": 0.29753610491752625,
+       "learning_rate": 0.0005103398842930102,
+       "loss": 0.2266,
+       "mean_token_accuracy": 0.9340476477146149,
+       "num_tokens": 1132284.0,
+       "step": 200
+     },
+     {
+       "epoch": 0.5431502715751357,
+       "grad_norm": 0.1415078341960907,
+       "learning_rate": 0.0005093594028664655,
+       "loss": 0.1822,
+       "mean_token_accuracy": 0.9487657606601715,
+       "num_tokens": 1290915.0,
+       "step": 225
+     },
+     {
+       "epoch": 0.6035003017501509,
+       "grad_norm": 0.19810789823532104,
+       "learning_rate": 0.0005081821066345455,
+       "loss": 0.1458,
+       "mean_token_accuracy": 0.9581668329238892,
+       "num_tokens": 1418595.0,
+       "step": 250
+     },
+     {
+       "epoch": 0.663850331925166,
+       "grad_norm": 0.1245037168264389,
+       "learning_rate": 0.0005068089106869988,
+       "loss": 0.1361,
+       "mean_token_accuracy": 0.9611250156164169,
+       "num_tokens": 1576137.0,
+       "step": 275
+     },
+     {
+       "epoch": 0.724200362100181,
+       "grad_norm": 0.18405039608478546,
+       "learning_rate": 0.0005052408823826598,
+       "loss": 0.1393,
+       "mean_token_accuracy": 0.9614962357282638,
+       "num_tokens": 1701485.0,
+       "step": 300
+     },
+     {
+       "epoch": 0.7845503922751962,
+       "grad_norm": 0.18599726259708405,
+       "learning_rate": 0.000503479240519812,
+       "loss": 0.1137,
+       "mean_token_accuracy": 0.9682038247585296,
+       "num_tokens": 1859818.0,
+       "step": 325
+     },
+     {
+       "epoch": 0.8449004224502112,
+       "grad_norm": 0.1856354922056198,
+       "learning_rate": 0.0005015253543888389,
+       "loss": 0.0891,
+       "mean_token_accuracy": 0.9745866447687149,
+       "num_tokens": 1987015.0,
+       "step": 350
+     },
+     {
+       "epoch": 0.9052504526252263,
+       "grad_norm": 0.10939127951860428,
+       "learning_rate": 0.0004993807427079012,
+       "loss": 0.1001,
+       "mean_token_accuracy": 0.9714098435640335,
+       "num_tokens": 2146234.0,
+       "step": 375
+     },
+     {
+       "epoch": 0.9656004828002414,
+       "grad_norm": 0.3307282030582428,
+       "learning_rate": 0.0004970470724424662,
+       "loss": 0.0884,
+       "mean_token_accuracy": 0.9754749721288681,
+       "num_tokens": 2273585.0,
+       "step": 400
+     },
+     {
+       "epoch": 1.0,
+       "eval_loss": 0.08956408500671387,
+       "eval_mean_token_accuracy": 0.9750892632716411,
+       "eval_num_tokens": 2354180.0,
+       "eval_runtime": 16.024,
+       "eval_samples_per_second": 23.028,
+       "eval_steps_per_second": 11.545,
+       "step": 415
+     },
+     {
+       "epoch": 1.024140012070006,
+       "grad_norm": 0.16935159265995026,
+       "learning_rate": 0.0004945261575096078,
+       "loss": 0.101,
+       "mean_token_accuracy": 0.9729157378993083,
+       "num_tokens": 2425025.0,
+       "step": 425
+     },
+     {
+       "epoch": 1.0844900422450212,
+       "grad_norm": 0.12800893187522888,
+       "learning_rate": 0.0004918199573680834,
+       "loss": 0.0615,
+       "mean_token_accuracy": 0.9824073499441147,
+       "num_tokens": 2568833.0,
+       "step": 450
+     },
+     {
+       "epoch": 1.1448400724200363,
+       "grad_norm": 0.10300774872303009,
+       "learning_rate": 0.0004889305754952839,
+       "loss": 0.0805,
+       "mean_token_accuracy": 0.9773273587226867,
+       "num_tokens": 2710895.0,
+       "step": 475
+     },
+     {
+       "epoch": 1.2051901025950513,
+       "grad_norm": 0.11092197895050049,
+       "learning_rate": 0.0004858602577522418,
+       "loss": 0.0588,
+       "mean_token_accuracy": 0.9833286923170089,
+       "num_tokens": 2853318.0,
+       "step": 500
+     },
+     {
+       "epoch": 1.2655401327700664,
+       "grad_norm": 0.0948944017291069,
+       "learning_rate": 0.0004826113906379664,
+       "loss": 0.0838,
+       "mean_token_accuracy": 0.9770084321498871,
+       "num_tokens": 2994882.0,
+       "step": 525
+     },
+     {
+       "epoch": 1.3258901629450814,
+       "grad_norm": 0.12024246156215668,
+       "learning_rate": 0.00047918649943446345,
+       "loss": 0.0572,
+       "mean_token_accuracy": 0.9838370496034622,
+       "num_tokens": 3137930.0,
+       "step": 550
+     },
+     {
+       "epoch": 1.3862401931200965,
+       "grad_norm": 0.12121743708848953,
+       "learning_rate": 0.0004755882462438826,
+       "loss": 0.0611,
+       "mean_token_accuracy": 0.9828894352912902,
+       "num_tokens": 3279894.0,
+       "step": 575
+     },
+     {
+       "epoch": 1.4465902232951118,
+       "grad_norm": 0.19103674590587616,
+       "learning_rate": 0.000471819427919316,
+       "loss": 0.0455,
+       "mean_token_accuracy": 0.9865969383716583,
+       "num_tokens": 3420462.0,
+       "step": 600
+     },
+     {
+       "epoch": 1.5069402534701268,
+       "grad_norm": 0.06473100930452347,
+       "learning_rate": 0.0004678829738908584,
+       "loss": 0.0647,
+       "mean_token_accuracy": 0.9815235859155655,
+       "num_tokens": 3561941.0,
+       "step": 625
+     },
+     {
+       "epoch": 1.567290283645142,
+       "grad_norm": 0.09202416986227036,
+       "learning_rate": 0.0004637819438886175,
+       "loss": 0.0517,
+       "mean_token_accuracy": 0.9851528346538544,
+       "num_tokens": 3703510.0,
+       "step": 650
+     },
+     {
+       "epoch": 1.627640313820157,
+       "grad_norm": 0.08362529426813126,
+       "learning_rate": 0.00045951952556444426,
+       "loss": 0.0642,
+       "mean_token_accuracy": 0.9822886544466018,
+       "num_tokens": 3842063.0,
+       "step": 675
+     },
+     {
+       "epoch": 1.687990343995172,
+       "grad_norm": 0.0631742924451828,
+       "learning_rate": 0.0004550990320142324,
+       "loss": 0.0441,
+       "mean_token_accuracy": 0.9875086861848831,
+       "num_tokens": 3984506.0,
+       "step": 700
+     },
+     {
+       "epoch": 1.748340374170187,
+       "grad_norm": 0.07872219383716583,
+       "learning_rate": 0.00045052389920271276,
+       "loss": 0.0569,
+       "mean_token_accuracy": 0.9842114639282227,
+       "num_tokens": 4127213.0,
+       "step": 725
+     },
+     {
+       "epoch": 1.8086904043452021,
+       "grad_norm": 0.08871777355670929,
+       "learning_rate": 0.0004457976832927436,
+       "loss": 0.0437,
+       "mean_token_accuracy": 0.9873430663347245,
+       "num_tokens": 4270185.0,
+       "step": 750
+     },
+     {
+       "epoch": 1.8690404345202172,
+       "grad_norm": 0.08713535219430923,
+       "learning_rate": 0.00044092405788117396,
+       "loss": 0.0583,
+       "mean_token_accuracy": 0.9836823076009751,
+       "num_tokens": 4412354.0,
+       "step": 775
+     },
+     {
+       "epoch": 1.9293904646952322,
+       "grad_norm": 0.10155721753835678,
+       "learning_rate": 0.00043590681114342696,
+       "loss": 0.0404,
+       "mean_token_accuracy": 0.9879835307598114,
+       "num_tokens": 4556520.0,
+       "step": 800
+     },
+     {
+       "epoch": 1.9897404948702473,
+       "grad_norm": 0.0794239267706871,
+       "learning_rate": 0.0004307498428890239,
+       "loss": 0.045,
+       "mean_token_accuracy": 0.9872637808322906,
+       "num_tokens": 4688903.0,
+       "step": 825
+     },
+     {
+       "epoch": 2.0,
+       "eval_loss": 0.05410139262676239,
+       "eval_mean_token_accuracy": 0.9851650663324305,
+       "eval_num_tokens": 4708360.0,
+       "eval_runtime": 16.0082,
+       "eval_samples_per_second": 23.051,
+       "eval_steps_per_second": 11.557,
+       "step": 830
+     },
+     {
+       "epoch": 2.048280024140012,
+       "grad_norm": 0.10252567380666733,
+       "learning_rate": 0.00042545716153033746,
+       "loss": 0.0495,
+       "mean_token_accuracy": 0.9853065284257082,
+       "num_tokens": 4838468.0,
+       "step": 850
+     },
+     {
+       "epoch": 2.1086300543150274,
+       "grad_norm": 0.04914547875523567,
+       "learning_rate": 0.0004200328809669296,
+       "loss": 0.0313,
+       "mean_token_accuracy": 0.9909292554855347,
+       "num_tokens": 4972061.0,
+       "step": 875
+     },
+     {
+       "epoch": 2.1689800844900424,
+       "grad_norm": 0.06335192173719406,
+       "learning_rate": 0.00041448121738789633,
+       "loss": 0.0449,
+       "mean_token_accuracy": 0.9870324164628983,
+       "num_tokens": 5123609.0,
+       "step": 900
+     },
+     {
+       "epoch": 2.2293301146650575,
+       "grad_norm": 0.09024782478809357,
+       "learning_rate": 0.0004088064859947051,
+       "loss": 0.0336,
+       "mean_token_accuracy": 0.9899903804063797,
+       "num_tokens": 5255900.0,
+       "step": 925
+     },
+     {
+       "epoch": 2.2896801448400725,
+       "grad_norm": 0.06487799435853958,
+       "learning_rate": 0.0004030130976470715,
+       "loss": 0.0471,
+       "mean_token_accuracy": 0.9861943638324737,
+       "num_tokens": 5408377.0,
+       "step": 950
+     },
+     {
+       "epoch": 2.3500301750150876,
+       "grad_norm": 0.03881136327981949,
+       "learning_rate": 0.00039710555543448267,
+       "loss": 0.033,
+       "mean_token_accuracy": 0.9898175239562989,
+       "num_tokens": 5540979.0,
+       "step": 975
+     },
+     {
+       "epoch": 2.4103802051901027,
+       "grad_norm": 0.05819237604737282,
+       "learning_rate": 0.0003910884511760325,
+       "loss": 0.0428,
+       "mean_token_accuracy": 0.9870308661460876,
+       "num_tokens": 5693030.0,
+       "step": 1000
+     },
+     {
+       "epoch": 2.4707302353651177,
+       "grad_norm": 0.06926289945840836,
+       "learning_rate": 0.00038496646185128854,
+       "loss": 0.0288,
+       "mean_token_accuracy": 0.9914705574512481,
+       "num_tokens": 5827027.0,
+       "step": 1025
+     },
+     {
+       "epoch": 2.5310802655401328,
+       "grad_norm": 0.09394887089729309,
+       "learning_rate": 0.000378744345964966,
+       "loss": 0.0439,
+       "mean_token_accuracy": 0.9866676324605942,
+       "num_tokens": 5975797.0,
+       "step": 1050
+     },
+     {
+       "epoch": 2.591430295715148,
+       "grad_norm": 0.07537297159433365,
+       "learning_rate": 0.0003724269398482333,
+       "loss": 0.0316,
+       "mean_token_accuracy": 0.9907770365476608,
+       "num_tokens": 6107036.0,
+       "step": 1075
+     },
+     {
+       "epoch": 2.651780325890163,
+       "grad_norm": 0.044363752007484436,
+       "learning_rate": 0.00036601915389952434,
+       "loss": 0.046,
+       "mean_token_accuracy": 0.9861960715055466,
+       "num_tokens": 6258873.0,
+       "step": 1100
+     },
+     {
+       "epoch": 2.712130356065178,
+       "grad_norm": 0.084761843085289,
+       "learning_rate": 0.00035952596876778076,
+       "loss": 0.031,
+       "mean_token_accuracy": 0.9905279046297073,
+       "num_tokens": 6392411.0,
+       "step": 1125
+     },
+     {
+       "epoch": 2.772480386240193,
+       "grad_norm": 0.05843805894255638,
+       "learning_rate": 0.00035295243148108894,
+       "loss": 0.0441,
+       "mean_token_accuracy": 0.9872340881824493,
+       "num_tokens": 6542051.0,
+       "step": 1150
+     },
+     {
+       "epoch": 2.832830416415208,
+       "grad_norm": 0.05149897560477257,
+       "learning_rate": 0.00034630365152372165,
+       "loss": 0.0286,
+       "mean_token_accuracy": 0.9911447340250015,
+       "num_tokens": 6674951.0,
+       "step": 1175
+     },
+     {
+       "epoch": 2.8931804465902236,
+       "grad_norm": 0.04212498292326927,
+       "learning_rate": 0.00033958479686463464,
+       "loss": 0.042,
+       "mean_token_accuracy": 0.9873944985866546,
+       "num_tokens": 6827063.0,
+       "step": 1200
+     },
+     {
+       "epoch": 2.9535304767652386,
+       "grad_norm": 0.02952578291296959,
+       "learning_rate": 0.00033280108994050315,
+       "loss": 0.0288,
+       "mean_token_accuracy": 0.9914515954256058,
+       "num_tokens": 6960300.0,
+       "step": 1225
+     },
+     {
+       "epoch": 3.0,
+       "eval_loss": 0.045313552021980286,
+       "eval_mean_token_accuracy": 0.9875944820610253,
+       "eval_num_tokens": 7062540.0,
+       "eval_runtime": 15.9787,
+       "eval_samples_per_second": 23.093,
+       "eval_steps_per_second": 11.578,
+       "step": 1245
+     }
+   ],
+   "logging_steps": 25,
+   "max_steps": 2905,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 7,
+   "save_steps": 500,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 3.0685210928193024e+17,
+   "train_batch_size": 2,
+   "trial_name": null,
+   "trial_params": null
+ }
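
A small sketch for reading this log, assuming the file is downloaded locally as `trainer_state.json`; it separates the step-wise training entries from the per-epoch eval entries:

```python
import json

# Sketch: split trainer_state.json's log_history into train and eval entries.
with open("trainer_state.json") as f:
    state = json.load(f)

train_log = [e for e in state["log_history"] if "loss" in e]
eval_log = [e for e in state["log_history"] if "eval_loss" in e]

print(f"{len(train_log)} train logs, {len(eval_log)} evals, "
      f"final step {state['global_step']}")
for e in eval_log:
    print(f"epoch {e['epoch']:.0f}: eval_loss={e['eval_loss']:.4f}, "
          f"eval_mean_token_accuracy={e['eval_mean_token_accuracy']:.4f}")
```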
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3aca81315bde14ece69eb9f4dddd5f4b7bb5393ac99e6a78ae025523ceef1a1d
+ size 6097
vocab.json ADDED
The diff for this file is too large to render. See raw diff