Commit fe9a607 (verified) committed by adammoss
1 parent: 476153c

Training in progress, step 500, checkpoint
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ last-checkpoint/tokenizer.json filter=lfs diff=lfs merge=lfs -text
last-checkpoint/README.md ADDED
@@ -0,0 +1,209 @@
+ ---
+ base_model: Qwen/Qwen2.5-Coder-7B
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:Qwen/Qwen2.5-Coder-7B
+ - lora
+ - sft
+ - transformers
+ - trl
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.0
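
The card above is still the unfilled PEFT template; the only concrete facts in it are the base model (Qwen/Qwen2.5-Coder-7B), the stack (peft/trl/transformers, LoRA, SFT), and PEFT 0.18.0. As a minimal sketch of the missing "How to Get Started" section — assuming a local checkout of the `last-checkpoint/` directory, which the card itself does not state — loading the adapter could look like this:

```python
# Minimal sketch: attach the LoRA adapter in this checkpoint to the base
# model. "last-checkpoint" is an assumed local path based on the directory
# name in this commit; device_map="auto" additionally requires accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B",
    torch_dtype="auto",
    device_map="auto",
)
# Reads adapter_config.json and adapter_model.safetensors from the directory.
model = PeftModel.from_pretrained(base, "last-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("last-checkpoint")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```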
last-checkpoint/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "alora_invocation_tokens": null,
+   "alpha_pattern": {},
+   "arrow_config": null,
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen2.5-Coder-7B",
+   "bias": "none",
+   "corda_config": null,
+   "ensure_weight_tying": false,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "peft_version": "0.18.0",
+   "qalora_group_size": 16,
+   "r": 16,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "o_proj",
+     "down_proj",
+     "q_proj",
+     "gate_proj",
+     "k_proj",
+     "up_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
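
For reference, the JSON above maps onto a standard `peft.LoraConfig`: rank-16 LoRA with alpha 32 (scaling factor alpha/r = 2.0), dropout 0.05, no bias terms, applied to every attention and MLP projection of the base model. A hedged reconstruction showing only the non-default fields (names per PEFT 0.18.0):

```python
# Sketch: the LoraConfig equivalent of the adapter_config.json above
# (JSON null -> Python None; defaults omitted for brevity).
from peft import LoraConfig

config = LoraConfig(
    r=16,               # LoRA rank
    lora_alpha=32,      # effective scale = lora_alpha / r = 2.0
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
)
```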
last-checkpoint/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ee36c891f3a22e347d2dee6b81d0e0f4dbc9e9da81f5eda10a20ddda32faf75
+ size 80792880
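
This is a Git LFS pointer, not the weights themselves: the actual ~81 MB safetensors blob is addressed by its SHA-256 oid. A small sketch for checking a downloaded blob against the pointer (the local path is an assumption):

```python
# Sketch: verify a fetched LFS object against the pointer's oid and size.
import hashlib
import os

path = "last-checkpoint/adapter_model.safetensors"  # assumed local checkout
expected_oid = "6ee36c891f3a22e347d2dee6b81d0e0f4dbc9e9da81f5eda10a20ddda32faf75"
expected_size = 80792880

assert os.path.getsize(path) == expected_size
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == expected_oid
```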
last-checkpoint/added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "</tool_call>": 151658,
+   "<tool_call>": 151657,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
last-checkpoint/chat_template.jinja ADDED
@@ -0,0 +1,54 @@
+ {%- if tools %}
+ {{- '<|im_start|>system\n' }}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- messages[0]['content'] }}
+ {%- else %}
+ {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
+ {%- endif %}
+ {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+ {%- for tool in tools %}
+ {{- "\n" }}
+ {{- tool | tojson }}
+ {%- endfor %}
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
+ {%- else %}
+ {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- for message in messages %}
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "assistant" %}
+ {{- '<|im_start|>' + message.role }}
+ {%- if message.content %}
+ {{- '\n' + message.content }}
+ {%- endif %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if tool_call.function is defined %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '\n<tool_call>\n{"name": "' }}
+ {{- tool_call.name }}
+ {{- '", "arguments": ' }}
+ {{- tool_call.arguments | tojson }}
+ {{- '}\n</tool_call>' }}
+ {%- endfor %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
+ {{- '<|im_start|>user' }}
+ {%- endif %}
+ {{- '\n<tool_response>\n' }}
+ {{- message.content }}
+ {{- '\n</tool_response>' }}
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+ {{- '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
+ {%- endif %}
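
This is the stock Qwen2.5 ChatML template: an optional tool-calling system preamble, `<|im_start|>role ... <|im_end|>` framing for each message, `<tool_call>`/`<tool_response>` wrapping for function calls, and a trailing `<|im_start|>assistant\n` when a generation prompt is requested. A sketch of rendering it through the tokenizer (again assuming a local checkout):

```python
# Sketch: apply the chat template above. apply_chat_template wraps the
# messages in <|im_start|>/<|im_end|> markers and, with
# add_generation_prompt=True, appends "<|im_start|>assistant\n".
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("last-checkpoint")  # assumed path
messages = [
    {"role": "system", "content": "You are a coding assistant."},
    {"role": "user", "content": "Write a function that reverses a string."},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(text)
```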
last-checkpoint/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
last-checkpoint/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d82702bf81ebf981e821722866ba38fa409e7b3edf45193a286d08991b69130
+ size 161816251
last-checkpoint/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23a6fa08fb4b60be6b07c667513821f6d49cf047f23041aa94471552e36c4200
+ size 14645
last-checkpoint/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1bf87b70878fcf961e6c3a6a3908808259c2f062d0e4392ea9d53090ee99bae
+ size 1465
last-checkpoint/special_tokens_map.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|endoftext|>"
+ }
last-checkpoint/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
+ size 11421896
last-checkpoint/tokenizer_config.json ADDED
@@ -0,0 +1,207 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|object_ref_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|object_ref_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "<|box_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "<|box_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 32768,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
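
The three tokenizer files fit together: `added_tokens.json` assigns IDs 151643-151664 to the extra tokens, `special_tokens_map.json` marks which of them are special, and the config above sets `<|endoftext|>` as both EOS and pad with a 32,768-token context. A small sanity-check sketch (assuming a local checkout):

```python
# Sketch: load the tokenizer from the checkpoint and confirm the
# configuration recorded in the JSON files above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("last-checkpoint")  # assumed path
print(tok.eos_token, tok.pad_token)              # both "<|endoftext|>"
print(tok.model_max_length)                      # 32768
print(tok.convert_tokens_to_ids("<|im_end|>"))   # 151645, per added_tokens.json
```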
last-checkpoint/trainer_state.json ADDED
@@ -0,0 +1,545 @@
+ {
+   "best_global_step": null,
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.05105557399229061,
+   "eval_steps": 500,
+   "global_step": 500,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "entropy": 1.6105554603040217,
+       "epoch": 0.001021111479845812,
+       "grad_norm": 0.0771484375,
+       "learning_rate": 6.122448979591837e-06,
+       "loss": 1.6234,
+       "mean_token_accuracy": 0.6229987986385822,
+       "num_tokens": 162084.0,
+       "step": 10
+     },
+     {
+       "entropy": 1.4474539875984191,
+       "epoch": 0.002042222959691624,
+       "grad_norm": 0.08740234375,
+       "learning_rate": 1.2925170068027212e-05,
+       "loss": 1.4544,
+       "mean_token_accuracy": 0.6559550605714322,
+       "num_tokens": 324440.0,
+       "step": 20
+     },
+     {
+       "entropy": 1.5262735523283482,
+       "epoch": 0.0030633344395374364,
+       "grad_norm": 0.08056640625,
+       "learning_rate": 1.9727891156462584e-05,
+       "loss": 1.5385,
+       "mean_token_accuracy": 0.6397140312939882,
+       "num_tokens": 486213.0,
+       "step": 30
+     },
+     {
+       "entropy": 1.6379423663020134,
+       "epoch": 0.004084445919383248,
+       "grad_norm": 0.09423828125,
+       "learning_rate": 2.6530612244897963e-05,
+       "loss": 1.6491,
+       "mean_token_accuracy": 0.6172465864568949,
+       "num_tokens": 649147.0,
+       "step": 40
+     },
+     {
+       "entropy": 1.499549924582243,
+       "epoch": 0.0051055573992290606,
+       "grad_norm": 0.10546875,
+       "learning_rate": 3.3333333333333335e-05,
+       "loss": 1.5024,
+       "mean_token_accuracy": 0.6480301663279533,
+       "num_tokens": 811343.0,
+       "step": 50
+     },
+     {
+       "entropy": 1.5192804619669915,
+       "epoch": 0.006126668879074873,
+       "grad_norm": 0.11328125,
+       "learning_rate": 4.013605442176871e-05,
+       "loss": 1.5152,
+       "mean_token_accuracy": 0.6419547680765391,
+       "num_tokens": 972821.0,
+       "step": 60
+     },
+     {
+       "entropy": 1.5267759434878827,
+       "epoch": 0.007147780358920685,
+       "grad_norm": 0.1279296875,
+       "learning_rate": 4.6938775510204086e-05,
+       "loss": 1.5155,
+       "mean_token_accuracy": 0.6439953502267599,
+       "num_tokens": 1135605.0,
+       "step": 70
+     },
+     {
+       "entropy": 1.5248031470924617,
+       "epoch": 0.008168891838766497,
+       "grad_norm": 0.115234375,
+       "learning_rate": 5.374149659863946e-05,
+       "loss": 1.5244,
+       "mean_token_accuracy": 0.6396083779633045,
+       "num_tokens": 1298210.0,
+       "step": 80
+     },
+     {
+       "entropy": 1.5474840186536312,
+       "epoch": 0.009190003318612309,
+       "grad_norm": 0.2119140625,
+       "learning_rate": 6.0544217687074836e-05,
+       "loss": 1.5554,
+       "mean_token_accuracy": 0.6326705653220415,
+       "num_tokens": 1460914.0,
+       "step": 90
+     },
+     {
+       "entropy": 1.5328328274190426,
+       "epoch": 0.010211114798458121,
+       "grad_norm": 0.1513671875,
+       "learning_rate": 6.73469387755102e-05,
+       "loss": 1.5167,
+       "mean_token_accuracy": 0.6380768191069365,
+       "num_tokens": 1621803.0,
+       "step": 100
+     },
+     {
+       "entropy": 1.495370238274336,
+       "epoch": 0.011232226278303933,
+       "grad_norm": 0.1728515625,
+       "learning_rate": 7.414965986394559e-05,
+       "loss": 1.4942,
+       "mean_token_accuracy": 0.6465954400599003,
+       "num_tokens": 1783951.0,
+       "step": 110
+     },
+     {
+       "entropy": 1.508614156395197,
+       "epoch": 0.012253337758149746,
+       "grad_norm": 0.1708984375,
+       "learning_rate": 8.095238095238096e-05,
+       "loss": 1.5185,
+       "mean_token_accuracy": 0.6409287318587303,
+       "num_tokens": 1944945.0,
+       "step": 120
+     },
+     {
+       "entropy": 1.5043984994292259,
+       "epoch": 0.013274449237995558,
+       "grad_norm": 0.1787109375,
+       "learning_rate": 8.775510204081632e-05,
+       "loss": 1.4934,
+       "mean_token_accuracy": 0.6461466059088707,
+       "num_tokens": 2107353.0,
+       "step": 130
+     },
+     {
+       "entropy": 1.4835106305778027,
+       "epoch": 0.01429556071784137,
+       "grad_norm": 0.1806640625,
+       "learning_rate": 9.455782312925171e-05,
+       "loss": 1.4963,
+       "mean_token_accuracy": 0.6431496165692806,
+       "num_tokens": 2268885.0,
+       "step": 140
+     },
+     {
+       "entropy": 1.594026555120945,
+       "epoch": 0.015316672197687183,
+       "grad_norm": 0.205078125,
+       "learning_rate": 0.00010136054421768707,
+       "loss": 1.5827,
+       "mean_token_accuracy": 0.6312672674655915,
+       "num_tokens": 2431297.0,
+       "step": 150
+     },
+     {
+       "entropy": 1.5235594533383847,
+       "epoch": 0.016337783677532993,
+       "grad_norm": 0.1826171875,
+       "learning_rate": 0.00010816326530612246,
+       "loss": 1.5358,
+       "mean_token_accuracy": 0.6397741161286831,
+       "num_tokens": 2593913.0,
+       "step": 160
+     },
+     {
+       "entropy": 1.5062347128987312,
+       "epoch": 0.017358895157378807,
+       "grad_norm": 0.158203125,
+       "learning_rate": 0.00011496598639455783,
+       "loss": 1.4971,
+       "mean_token_accuracy": 0.64527667760849,
+       "num_tokens": 2756310.0,
+       "step": 170
+     },
+     {
+       "entropy": 1.5257433257997035,
+       "epoch": 0.018380006637224618,
+       "grad_norm": 0.169921875,
+       "learning_rate": 0.0001217687074829932,
+       "loss": 1.5298,
+       "mean_token_accuracy": 0.6407827861607075,
+       "num_tokens": 2917778.0,
+       "step": 180
+     },
+     {
+       "entropy": 1.4068532407283783,
+       "epoch": 0.01940111811707043,
+       "grad_norm": 0.1689453125,
+       "learning_rate": 0.00012857142857142858,
+       "loss": 1.4027,
+       "mean_token_accuracy": 0.6642264045774937,
+       "num_tokens": 3080534.0,
+       "step": 190
+     },
+     {
+       "entropy": 1.5637503005564213,
+       "epoch": 0.020422229596916242,
+       "grad_norm": 0.1533203125,
+       "learning_rate": 0.00013537414965986394,
+       "loss": 1.5514,
+       "mean_token_accuracy": 0.635703957080841,
+       "num_tokens": 3243193.0,
+       "step": 200
+     },
+     {
+       "entropy": 1.4107303723692894,
+       "epoch": 0.021443341076762056,
+       "grad_norm": 0.21484375,
+       "learning_rate": 0.00014217687074829933,
+       "loss": 1.4219,
+       "mean_token_accuracy": 0.6650175258517266,
+       "num_tokens": 3405604.0,
+       "step": 210
+     },
+     {
+       "entropy": 1.3540484137833118,
+       "epoch": 0.022464452556607867,
+       "grad_norm": 0.1552734375,
+       "learning_rate": 0.00014897959183673472,
+       "loss": 1.3583,
+       "mean_token_accuracy": 0.6779321998357772,
+       "num_tokens": 3567707.0,
+       "step": 220
+     },
+     {
+       "entropy": 1.515150335431099,
+       "epoch": 0.02348556403645368,
+       "grad_norm": 0.1552734375,
+       "learning_rate": 0.00015578231292517008,
+       "loss": 1.5094,
+       "mean_token_accuracy": 0.6437131322920322,
+       "num_tokens": 3730573.0,
+       "step": 230
+     },
+     {
+       "entropy": 1.4917580775916577,
+       "epoch": 0.02450667551629949,
+       "grad_norm": 0.134765625,
+       "learning_rate": 0.00016258503401360547,
+       "loss": 1.485,
+       "mean_token_accuracy": 0.6469886351376772,
+       "num_tokens": 3892070.0,
+       "step": 240
+     },
+     {
+       "entropy": 1.4536812901496887,
+       "epoch": 0.025527786996145305,
+       "grad_norm": 0.177734375,
+       "learning_rate": 0.00016938775510204083,
+       "loss": 1.4576,
+       "mean_token_accuracy": 0.653927194327116,
+       "num_tokens": 4054840.0,
+       "step": 250
+     },
+     {
+       "entropy": 1.4987610191106797,
+       "epoch": 0.026548898475991116,
+       "grad_norm": 0.1337890625,
+       "learning_rate": 0.0001761904761904762,
+       "loss": 1.4915,
+       "mean_token_accuracy": 0.6461767859756946,
+       "num_tokens": 4216819.0,
+       "step": 260
+     },
+     {
+       "entropy": 1.4735413741320371,
+       "epoch": 0.02757000995583693,
+       "grad_norm": 0.134765625,
+       "learning_rate": 0.00018299319727891158,
+       "loss": 1.4761,
+       "mean_token_accuracy": 0.6535991318523884,
+       "num_tokens": 4379456.0,
+       "step": 270
+     },
+     {
+       "entropy": 1.3969263426959515,
+       "epoch": 0.02859112143568274,
+       "grad_norm": 0.134765625,
+       "learning_rate": 0.00018979591836734697,
+       "loss": 1.3899,
+       "mean_token_accuracy": 0.6731207601726055,
+       "num_tokens": 4542210.0,
+       "step": 280
+     },
+     {
+       "entropy": 1.445039076730609,
+       "epoch": 0.029612232915528554,
+       "grad_norm": 0.1533203125,
+       "learning_rate": 0.00019659863945578233,
+       "loss": 1.4444,
+       "mean_token_accuracy": 0.6599366627633572,
+       "num_tokens": 4704714.0,
+       "step": 290
+     },
+     {
+       "entropy": 1.5279732409864664,
+       "epoch": 0.030633344395374365,
+       "grad_norm": 0.134765625,
+       "learning_rate": 0.00019999986330190926,
+       "loss": 1.5272,
+       "mean_token_accuracy": 0.6418842054903507,
+       "num_tokens": 4867296.0,
+       "step": 300
+     },
+     {
+       "entropy": 1.4492794305086136,
+       "epoch": 0.03165445587522018,
+       "grad_norm": 0.12353515625,
+       "learning_rate": 0.00019999876971942557,
+       "loss": 1.4412,
+       "mean_token_accuracy": 0.6598815761506558,
+       "num_tokens": 5029281.0,
+       "step": 310
+     },
+     {
+       "entropy": 1.4836263785604387,
+       "epoch": 0.032675567355065986,
+       "grad_norm": 0.1357421875,
+       "learning_rate": 0.00019999658256641747,
+       "loss": 1.4862,
+       "mean_token_accuracy": 0.6475081637501716,
+       "num_tokens": 5191743.0,
+       "step": 320
+     },
+     {
+       "entropy": 1.3623248231597245,
+       "epoch": 0.0336966788349118,
+       "grad_norm": 0.1376953125,
+       "learning_rate": 0.0001999933018668033,
+       "loss": 1.3616,
+       "mean_token_accuracy": 0.6757434003055096,
+       "num_tokens": 5353922.0,
+       "step": 330
+     },
+     {
+       "entropy": 1.5218863390386104,
+       "epoch": 0.034717790314757614,
+       "grad_norm": 0.1240234375,
+       "learning_rate": 0.00019998892765646026,
+       "loss": 1.5125,
+       "mean_token_accuracy": 0.6421493288129568,
+       "num_tokens": 5517243.0,
+       "step": 340
+     },
+     {
+       "entropy": 1.36881191637367,
+       "epoch": 0.03573890179460343,
+       "grad_norm": 0.1455078125,
+       "learning_rate": 0.00019998345998322397,
+       "loss": 1.3681,
+       "mean_token_accuracy": 0.6754211783409119,
+       "num_tokens": 5679935.0,
+       "step": 350
+     },
+     {
+       "entropy": 1.4223904270678758,
+       "epoch": 0.036760013274449235,
+       "grad_norm": 0.11376953125,
+       "learning_rate": 0.0001999768989068881,
+       "loss": 1.4221,
+       "mean_token_accuracy": 0.6603884272277355,
+       "num_tokens": 5842369.0,
+       "step": 360
+     },
+     {
+       "entropy": 1.5084102761000395,
+       "epoch": 0.03778112475429505,
+       "grad_norm": 0.1298828125,
+       "learning_rate": 0.0001999692444992035,
+       "loss": 1.5102,
+       "mean_token_accuracy": 0.646609878540039,
+       "num_tokens": 6005332.0,
+       "step": 370
+     },
+     {
+       "entropy": 1.4550730671733618,
+       "epoch": 0.03880223623414086,
+       "grad_norm": 0.12451171875,
+       "learning_rate": 0.0001999604968438775,
+       "loss": 1.4526,
+       "mean_token_accuracy": 0.6567724391818046,
+       "num_tokens": 6166980.0,
+       "step": 380
+     },
+     {
+       "entropy": 1.4567001653369516,
+       "epoch": 0.03982334771398668,
+       "grad_norm": 0.1279296875,
+       "learning_rate": 0.00019995065603657316,
+       "loss": 1.4563,
+       "mean_token_accuracy": 0.6559679053723813,
+       "num_tokens": 6330051.0,
+       "step": 390
+     },
+     {
+       "entropy": 1.5023609794676305,
+       "epoch": 0.040844459193832484,
+       "grad_norm": 0.12890625,
+       "learning_rate": 0.0001999397221849079,
+       "loss": 1.5048,
+       "mean_token_accuracy": 0.6437392018735408,
+       "num_tokens": 6492367.0,
+       "step": 400
+     },
+     {
+       "entropy": 1.41582164731808,
+       "epoch": 0.0418655706736783,
+       "grad_norm": 0.150390625,
+       "learning_rate": 0.00019992769540845258,
+       "loss": 1.4111,
+       "mean_token_accuracy": 0.6669952549040318,
+       "num_tokens": 6655166.0,
+       "step": 410
+     },
+     {
+       "entropy": 1.4339285803493111,
+       "epoch": 0.04288668215352411,
+       "grad_norm": 0.134765625,
+       "learning_rate": 0.0001999145758387301,
+       "loss": 1.437,
+       "mean_token_accuracy": 0.6595155119895935,
+       "num_tokens": 6818039.0,
+       "step": 420
+     },
+     {
+       "entropy": 1.5205090701114385,
+       "epoch": 0.043907793633369926,
+       "grad_norm": 0.140625,
+       "learning_rate": 0.000199900363619214,
+       "loss": 1.5194,
+       "mean_token_accuracy": 0.6407137364149094,
+       "num_tokens": 6980469.0,
+       "step": 430
+     },
+     {
+       "entropy": 1.4766571710584686,
+       "epoch": 0.044928905113215734,
+       "grad_norm": 0.1328125,
+       "learning_rate": 0.0001998850589053268,
+       "loss": 1.4812,
+       "mean_token_accuracy": 0.6553035363554954,
+       "num_tokens": 7142333.0,
+       "step": 440
+     },
+     {
+       "entropy": 1.4795013278722764,
+       "epoch": 0.04595001659306155,
+       "grad_norm": 0.12158203125,
+       "learning_rate": 0.0001998686618644384,
+       "loss": 1.473,
+       "mean_token_accuracy": 0.6515781451016665,
+       "num_tokens": 7305134.0,
+       "step": 450
+     },
+     {
+       "entropy": 1.3903306159889326,
+       "epoch": 0.04697112807290736,
+       "grad_norm": 0.1103515625,
+       "learning_rate": 0.00019985117267586424,
+       "loss": 1.3943,
+       "mean_token_accuracy": 0.668827860429883,
+       "num_tokens": 7467704.0,
+       "step": 460
+     },
+     {
+       "entropy": 1.4713697545230389,
+       "epoch": 0.04799223955275317,
+       "grad_norm": 0.1220703125,
+       "learning_rate": 0.00019983259153086327,
+       "loss": 1.4719,
+       "mean_token_accuracy": 0.6516519948840142,
+       "num_tokens": 7630816.0,
+       "step": 470
+     },
+     {
+       "entropy": 1.4951297044754028,
+       "epoch": 0.04901335103259898,
+       "grad_norm": 0.130859375,
+       "learning_rate": 0.00019981291863263592,
+       "loss": 1.4938,
+       "mean_token_accuracy": 0.6480146907269955,
+       "num_tokens": 7792599.0,
+       "step": 480
+     },
+     {
+       "entropy": 1.472099182009697,
+       "epoch": 0.0500344625124448,
+       "grad_norm": 0.140625,
+       "learning_rate": 0.00019979215419632182,
+       "loss": 1.4723,
+       "mean_token_accuracy": 0.6526057504117488,
+       "num_tokens": 7954570.0,
+       "step": 490
+     },
+     {
+       "entropy": 1.4787833941634745,
+       "epoch": 0.05105557399229061,
+       "grad_norm": 0.126953125,
+       "learning_rate": 0.00019977029844899758,
+       "loss": 1.4879,
+       "mean_token_accuracy": 0.6501187555491924,
+       "num_tokens": 8117325.0,
+       "step": 500
+     },
+     {
+       "epoch": 0.05105557399229061,
+       "eval_entropy": 1.4797989258202173,
+       "eval_loss": 1.4792004823684692,
+       "eval_mean_token_accuracy": 0.6508541675451408,
+       "eval_num_tokens": 8117325.0,
+       "eval_runtime": 1529.4932,
+       "eval_samples_per_second": 4.375,
+       "eval_steps_per_second": 0.547,
+       "step": 500
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 9794,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 1,
+   "save_steps": 500,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 3.463332679021824e+17,
+   "train_batch_size": 1,
+   "trial_name": null,
+   "trial_params": null
+ }
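
The state records 500 of 9,794 planned steps (about 5% of one epoch): train loss drifts from ~1.62 down to ~1.48 while the learning rate warms up to 2e-4 over roughly the first 290 steps and then begins a slow decay, and the step-500 eval gives loss 1.4792 at 65.1% mean token accuracy. A sketch for pulling the curve out of the file; the same directory can also be passed to `Trainer.train(resume_from_checkpoint=...)` to continue training:

```python
# Sketch: inspect the training curve recorded in trainer_state.json.
import json

with open("last-checkpoint/trainer_state.json") as f:  # assumed path
    state = json.load(f)

train_logs = [e for e in state["log_history"] if "loss" in e]
eval_logs = [e for e in state["log_history"] if "eval_loss" in e]

for e in train_logs[::10]:  # every 100th step (logging_steps = 10)
    print(f"step {e['step']:4d}  loss {e['loss']:.4f}  lr {e['learning_rate']:.2e}")
print("eval loss at step 500:", eval_logs[-1]["eval_loss"])  # 1.4792
```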
last-checkpoint/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e62ef06abb82806b9e7ff1b890a683f1fa00f3b0b16a7e2313d7041ad48d9dd
+ size 6289
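
`training_args.bin` is the Trainer's pickled `TrainingArguments` object. It can be inspected directly, but note that unpickling executes arbitrary code, so only do this with checkpoints you trust; a hedged sketch:

```python
# Sketch: load the pickled TrainingArguments saved by the Trainer.
# weights_only=False is required because this is a full pickle, not tensors.
import torch

args = torch.load("last-checkpoint/training_args.bin", weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size, args.save_steps)
```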
last-checkpoint/vocab.json ADDED
The diff for this file is too large to render. See raw diff