RayDu0010 committed on
Commit abf5b62 · verified · 1 Parent(s): 6fa32c6

Upload folder using huggingface_hub

instruct/26_128_e3_3e-5/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: ibm-granite/granite-3.3-8b-instruct
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+ ### Framework versions
+
+ - PEFT 0.15.2
instruct/26_128_e3_3e-5/adapter_config.json ADDED
@@ -0,0 +1,39 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "ibm-granite/granite-3.3-8b-instruct",
+ "bias": "none",
+ "corda_config": null,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 256,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 128,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "v_proj",
+ "down_proj",
+ "o_proj",
+ "gate_proj",
+ "up_proj",
+ "q_proj",
+ "k_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_rslora": false
+ }
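The config above sets `r: 128` and `lora_alpha: 256`, i.e. a scaling factor of alpha/r = 2.0 on the low-rank update. A minimal numpy sketch of how such an adapter modifies a frozen weight (the shapes here are illustrative only, not Granite's actual projection sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in = 512, 512       # illustrative; real q_proj/v_proj etc. are larger
r, alpha = 128, 256          # values from adapter_config.json
scaling = alpha / r          # PEFT applies delta_W = (alpha / r) * (B @ A)

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # LoRA "A" matrix
B = np.zeros((d_out, r))                    # LoRA "B" starts at zero

def lora_forward(x):
    """Apply the adapted weight, W plus the scaled low-rank update, to x."""
    return (W + scaling * (B @ A)) @ x

x = rng.standard_normal(d_in)
# With B = 0 (the default init, init_lora_weights: true), the adapter is an
# exact no-op, so training starts from the base model's behavior.
```

This is why only `B @ A` (2 × 512 × 128 parameters per matrix pair here) needs training while `W` stays frozen.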
instruct/26_128_e3_3e-5/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4d8e4b65a99dbd6a47242269cd44623f20a0b20d0e3c4a0c6f84ac21853f9f9e
+ size 791751704
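The `adapter_model.safetensors` entry above is a Git LFS pointer file, not the weights themselves: the repo stores only the version line, the SHA-256 object id, and the byte size (~792 MB here). A small sketch of parsing such a pointer, with the pointer text inlined from the diff:

```python
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:4d8e4b65a99dbd6a47242269cd44623f20a0b20d0e3c4a0c6f84ac21853f9f9e
size 791751704
"""

def parse_lfs_pointer(text):
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size is the byte count of the real blob
    return fields

info = parse_lfs_pointer(POINTER)
```

Tools like `huggingface_hub` resolve these pointers automatically on download; the sketch just shows what the three-line file encodes.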
instruct/26_128_e3_3e-5/added_tokens.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "<|end_of_cite|>": 49156,
+ "<|end_of_plugin|>": 49158,
+ "<|end_of_role|>": 49153,
+ "<|start_of_cite|>": 49155,
+ "<|start_of_plugin|>": 49157,
+ "<|start_of_role|>": 49152,
+ "<|tool_call|>": 49154
+ }
instruct/26_128_e3_3e-5/chat_template.jinja ADDED
@@ -0,0 +1,62 @@
+ {# Alias tools -> available_tools #}
+ {%- if tools and not available_tools -%}
+ {%- set available_tools = tools -%}
+ {%- endif -%}
+ {%- if messages[0]['role'] == 'system' %}
+ {%- set system_message = messages[0]['content'] %}
+ {%- set loop_messages = messages[1:] %}
+ {%- else %}
+ {%- set system_message = "Knowledge Cutoff Date: April 2024.
+ Today's Date: " + strftime_now('%B %d, %Y') + ".
+ You are Granite, developed by IBM." %}
+ {%- if available_tools and documents %}
+ {%- set system_message = system_message + " You are a helpful assistant with access to the following tools. When a tool is required to answer the user's query, respond only with <|tool_call|> followed by a JSON list of tools used. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.
+ Write the response to the user's input by strictly aligning with the facts in the provided documents. If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data." %}
+ {%- elif available_tools %}
+ {%- set system_message = system_message + " You are a helpful assistant with access to the following tools. When a tool is required to answer the user's query, respond only with <|tool_call|> followed by a JSON list of tools used. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request." %}
+ {%- elif documents %}
+ {%- set system_message = system_message + " Write the response to the user's input by strictly aligning with the facts in the provided documents. If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data." %}
+ {%- elif thinking %}
+ {%- set system_message = system_message + " You are a helpful AI assistant.
+ Respond to every user query in a comprehensive and detailed way. You can write down your thoughts and reasoning process before responding. In the thought process, engage in a comprehensive cycle of analysis, summarization, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. In the response section, based on various attempts, explorations, and reflections from the thoughts section, systematically present the final solution that you deem correct. The response should summarize the thought process. Write your thoughts between <think></think> and write your response between <response></response> for each user query." %}
+ {%- else %}
+ {%- set system_message = system_message + " You are a helpful AI assistant." %}
+ {%- endif %}
+ {%- if 'citations' in controls and documents %}
+ {%- set system_message = system_message + '
+ Use the symbols <|start_of_cite|> and <|end_of_cite|> to indicate when a fact comes from a document in the search result, e.g <|start_of_cite|> {document_id: 1}my fact <|end_of_cite|> for a fact from document 1. Afterwards, list all the citations with their corresponding documents in an ordered list.' %}
+ {%- endif %}
+ {%- if 'hallucinations' in controls and documents %}
+ {%- set system_message = system_message + '
+ Finally, after the response is written, include a numbered list of sentences from the response with a corresponding risk value that are hallucinated and not based in the documents.' %}
+ {%- endif %}
+ {%- set loop_messages = messages %}
+ {%- endif %}
+ {{- '<|start_of_role|>system<|end_of_role|>' + system_message + '<|end_of_text|>
+ ' }}
+ {%- if available_tools %}
+ {{- '<|start_of_role|>available_tools<|end_of_role|>' }}
+ {{- available_tools | tojson(indent=4) }}
+ {{- '<|end_of_text|>
+ ' }}
+ {%- endif %}
+ {%- if documents %}
+ {%- for document in documents %}
+ {{- '<|start_of_role|>document {"document_id": "' + document['doc_id'] | string + '"}<|end_of_role|>
+ ' }}
+ {{- document['text'] }}
+ {{- '<|end_of_text|>
+ ' }}
+ {%- endfor %}
+ {%- endif %}
+ {%- for message in loop_messages %}
+ {{- '<|start_of_role|>' + message['role'] + '<|end_of_role|>' + message['content'] + '<|end_of_text|>
+ ' }}
+ {%- if loop.last and add_generation_prompt %}
+ {{- '<|start_of_role|>assistant' }}
+ {%- if controls %}
+ {{- ' ' + controls | tojson()}}
+ {%- endif %}
+ {{- '<|end_of_role|>' }}
+ {%- endif %}
+ {%- endfor %}
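The template's final loop wraps every turn in the `<|start_of_role|>` / `<|end_of_role|>` / `<|end_of_text|>` markers defined in `added_tokens.json`. A toy Python re-implementation of just that message-rendering loop (not the tokenizer's actual `apply_chat_template`, and omitting the system/tools/documents/controls branches) shows the wire format it produces:

```python
def render(messages, add_generation_prompt=True):
    """Mimic the final for-loop of the Jinja chat template above:
    each turn becomes <|start_of_role|>role<|end_of_role|>content<|end_of_text|>,
    optionally followed by an open assistant turn for generation."""
    out = ""
    for m in messages:
        out += (
            f"<|start_of_role|>{m['role']}<|end_of_role|>"
            f"{m['content']}<|end_of_text|>\n"
        )
    if add_generation_prompt:
        # Left open so the model continues from here.
        out += "<|start_of_role|>assistant<|end_of_role|>"
    return out

prompt = render([{"role": "user", "content": "Hi"}])
```

In practice you would call `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, which runs the full template including the system-message branches.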
instruct/26_128_e3_3e-5/latest ADDED
@@ -0,0 +1 @@
+ global_step1326
instruct/26_128_e3_3e-5/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
instruct/26_128_e3_3e-5/rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e171497e72c6776a6f5961fc272308bde36cdf1dd6135fb103262cb441431f7
+ size 16325
instruct/26_128_e3_3e-5/rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f3ecec6c685212f60510964a7344bb6ede275b202de12946f7c4778dbc2770d
+ size 16325
instruct/26_128_e3_3e-5/rng_state_2.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e45d57aed2f459f8e3d84f97d238ab37257c458d682705fc78059eafc10f7e4
+ size 16325
instruct/26_128_e3_3e-5/rng_state_3.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05c5322bc4c810e5a16f5aa9b1766c2acea3ea445de6d5085b600eafd381f7c6
+ size 16325
instruct/26_128_e3_3e-5/rng_state_4.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:783b68efb10fd19dbdae90a1effff17ed0aad08808256cbcb86481e968dbf183
+ size 16325
instruct/26_128_e3_3e-5/rng_state_5.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8b7285024f9cdd372690acd8302eae01f39323b771df71be6d64549ed83be15
+ size 16325
instruct/26_128_e3_3e-5/rng_state_6.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5330459305a06b7e66bd470182e97ebe3f355f0dee3ae6cea711cf5a6f1fe27f
+ size 16325
instruct/26_128_e3_3e-5/rng_state_7.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d78ea365415d6cbc9cd5f59ac9dfbca92d7742886f0a0e757b5b58dcb33731ce
+ size 16325
instruct/26_128_e3_3e-5/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33b3e97f410e238a1fc1d44e0e20061515325c114fd81030a4f1a47367d79467
+ size 1401
instruct/26_128_e3_3e-5/special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
+ {
+ "additional_special_tokens": [
+ "<|start_of_role|>",
+ "<|end_of_role|>",
+ "<|tool_call|>",
+ "<|start_of_cite|>",
+ "<|end_of_cite|>",
+ "<|start_of_plugin|>",
+ "<|end_of_plugin|>"
+ ],
+ "bos_token": {
+ "content": "<|end_of_text|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|end_of_text|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<|end_of_plugin|>",
+ "unk_token": {
+ "content": "<|end_of_text|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
instruct/26_128_e3_3e-5/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
instruct/26_128_e3_3e-5/tokenizer_config.json ADDED
@@ -0,0 +1,234 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<|end_of_text|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<fim_prefix>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "<fim_middle>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "3": {
+ "content": "<fim_suffix>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "4": {
+ "content": "<fim_pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "5": {
+ "content": "<filename>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "6": {
+ "content": "<gh_stars>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "7": {
+ "content": "<issue_start>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "8": {
+ "content": "<issue_comment>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "9": {
+ "content": "<issue_closed>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "10": {
+ "content": "<jupyter_start>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "11": {
+ "content": "<jupyter_text>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "12": {
+ "content": "<jupyter_code>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "13": {
+ "content": "<jupyter_output>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "14": {
+ "content": "<empty_output>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "15": {
+ "content": "<commit_before>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "16": {
+ "content": "<commit_msg>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "17": {
+ "content": "<commit_after>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "18": {
+ "content": "<reponame>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49152": {
+ "content": "<|start_of_role|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49153": {
+ "content": "<|end_of_role|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49154": {
+ "content": "<|tool_call|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49155": {
+ "content": "<|start_of_cite|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49156": {
+ "content": "<|end_of_cite|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49157": {
+ "content": "<|start_of_plugin|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49158": {
+ "content": "<|end_of_plugin|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "<|start_of_role|>",
+ "<|end_of_role|>",
+ "<|tool_call|>",
+ "<|start_of_cite|>",
+ "<|end_of_cite|>",
+ "<|start_of_plugin|>",
+ "<|end_of_plugin|>"
+ ],
+ "bos_token": "<|end_of_text|>",
+ "clean_up_tokenization_spaces": true,
+ "eos_token": "<|end_of_text|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 8192,
+ "pad_token": "<|end_of_plugin|>",
+ "padding_side": "left",
+ "tokenizer_class": "GPT2Tokenizer",
+ "unk_token": "<|end_of_text|>",
+ "vocab_size": 49152
+ }
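Note the config pairs `"pad_token": "<|end_of_plugin|>"` (id 49158) with `"padding_side": "left"`: for batched causal-LM generation, shorter prompts are padded on the left so every sequence ends at its real last token. A sketch of that padding step on plain token-id lists (the example ids other than 49158 are made up):

```python
PAD_ID = 49158  # id of <|end_of_plugin|>, the configured pad token

def left_pad(batch, pad_id=PAD_ID):
    """Pad every sequence on the LEFT to the batch's max length,
    mirroring padding_side: "left" in tokenizer_config.json."""
    width = max(len(seq) for seq in batch)
    return [[pad_id] * (width - len(seq)) + seq for seq in batch]

padded = left_pad([[11, 12, 13], [21]])
# Both rows now end at their last real token, so generation can
# continue from position -1 for every row at once.
```

The tokenizer does this internally when called with `padding=True`; the sketch only illustrates why the side matters.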
instruct/26_128_e3_3e-5/trainer_state.json ADDED
@@ -0,0 +1,1889 @@
+ {
+ "best_global_step": null,
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 3.0,
+ "eval_steps": 500,
+ "global_step": 1326,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.011312217194570135,
+ "grad_norm": 2.0912926197052,
+ "learning_rate": 1.791044776119403e-06,
+ "loss": 1.4605,
+ "step": 5
+ },
+ {
+ "epoch": 0.02262443438914027,
+ "grad_norm": 1.7910131216049194,
+ "learning_rate": 4.029850746268657e-06,
+ "loss": 1.4287,
+ "step": 10
+ },
+ {
+ "epoch": 0.033936651583710405,
+ "grad_norm": 0.9113977551460266,
+ "learning_rate": 6.268656716417911e-06,
+ "loss": 1.4269,
+ "step": 15
+ },
+ {
+ "epoch": 0.04524886877828054,
+ "grad_norm": 0.4947521686553955,
+ "learning_rate": 8.507462686567164e-06,
+ "loss": 1.3302,
+ "step": 20
+ },
+ {
+ "epoch": 0.05656108597285068,
+ "grad_norm": 0.42500633001327515,
+ "learning_rate": 1.0746268656716418e-05,
+ "loss": 1.3409,
+ "step": 25
+ },
+ {
+ "epoch": 0.06787330316742081,
+ "grad_norm": 0.43518832325935364,
+ "learning_rate": 1.2985074626865672e-05,
+ "loss": 1.3376,
+ "step": 30
+ },
+ {
+ "epoch": 0.07918552036199095,
+ "grad_norm": 0.3675978183746338,
+ "learning_rate": 1.5223880597014927e-05,
+ "loss": 1.326,
+ "step": 35
+ },
+ {
+ "epoch": 0.09049773755656108,
+ "grad_norm": 0.33139318227767944,
+ "learning_rate": 1.746268656716418e-05,
+ "loss": 1.2887,
+ "step": 40
+ },
+ {
+ "epoch": 0.10180995475113122,
+ "grad_norm": 0.35655292868614197,
+ "learning_rate": 1.9701492537313435e-05,
+ "loss": 1.3649,
+ "step": 45
+ },
+ {
+ "epoch": 0.11312217194570136,
+ "grad_norm": 0.3566742539405823,
+ "learning_rate": 2.194029850746269e-05,
+ "loss": 1.2637,
+ "step": 50
+ },
+ {
+ "epoch": 0.1244343891402715,
+ "grad_norm": 0.3607560694217682,
+ "learning_rate": 2.417910447761194e-05,
+ "loss": 1.2777,
+ "step": 55
+ },
+ {
+ "epoch": 0.13574660633484162,
+ "grad_norm": 0.3742026388645172,
+ "learning_rate": 2.6417910447761193e-05,
+ "loss": 1.2015,
+ "step": 60
+ },
+ {
+ "epoch": 0.14705882352941177,
+ "grad_norm": 0.3841893970966339,
+ "learning_rate": 2.8656716417910447e-05,
+ "loss": 1.2172,
+ "step": 65
+ },
+ {
+ "epoch": 0.1583710407239819,
+ "grad_norm": 0.4321725070476532,
+ "learning_rate": 2.9999813203541335e-05,
+ "loss": 1.1819,
+ "step": 70
+ },
+ {
+ "epoch": 0.16968325791855204,
+ "grad_norm": 0.39428895711898804,
+ "learning_rate": 2.9997711796810774e-05,
+ "loss": 1.1501,
+ "step": 75
+ },
+ {
+ "epoch": 0.18099547511312217,
+ "grad_norm": 0.3923000991344452,
+ "learning_rate": 2.9993275815975943e-05,
+ "loss": 1.1363,
+ "step": 80
+ },
+ {
+ "epoch": 0.19230769230769232,
+ "grad_norm": 0.4313000440597534,
+ "learning_rate": 2.9986505951550574e-05,
+ "loss": 1.1217,
+ "step": 85
+ },
+ {
+ "epoch": 0.20361990950226244,
+ "grad_norm": 0.46248558163642883,
+ "learning_rate": 2.9977403257345435e-05,
+ "loss": 1.1264,
+ "step": 90
+ },
+ {
+ "epoch": 0.2149321266968326,
+ "grad_norm": 0.44976806640625,
+ "learning_rate": 2.996596915030429e-05,
+ "loss": 1.0583,
+ "step": 95
+ },
+ {
+ "epoch": 0.22624434389140272,
+ "grad_norm": 0.4600139260292053,
+ "learning_rate": 2.995220541028333e-05,
+ "loss": 1.0922,
+ "step": 100
+ },
+ {
+ "epoch": 0.23755656108597284,
+ "grad_norm": 0.5614279508590698,
+ "learning_rate": 2.9936114179774118e-05,
+ "loss": 1.0561,
+ "step": 105
+ },
+ {
+ "epoch": 0.248868778280543,
+ "grad_norm": 0.5335162281990051,
+ "learning_rate": 2.991769796357009e-05,
+ "loss": 1.0022,
+ "step": 110
+ },
+ {
+ "epoch": 0.26018099547511314,
+ "grad_norm": 0.5181005001068115,
+ "learning_rate": 2.9896959628376653e-05,
+ "loss": 0.9937,
+ "step": 115
+ },
+ {
+ "epoch": 0.27149321266968324,
+ "grad_norm": 0.572638213634491,
+ "learning_rate": 2.987390240236494e-05,
+ "loss": 0.9866,
+ "step": 120
+ },
+ {
+ "epoch": 0.2828054298642534,
+ "grad_norm": 0.542224645614624,
+ "learning_rate": 2.984852987466931e-05,
+ "loss": 0.99,
+ "step": 125
+ },
+ {
+ "epoch": 0.29411764705882354,
+ "grad_norm": 0.5979598164558411,
+ "learning_rate": 2.982084599482867e-05,
+ "loss": 0.9641,
+ "step": 130
+ },
+ {
+ "epoch": 0.3054298642533937,
+ "grad_norm": 0.6587913632392883,
+ "learning_rate": 2.979085507217165e-05,
+ "loss": 0.9706,
+ "step": 135
+ },
+ {
+ "epoch": 0.3167420814479638,
+ "grad_norm": 0.6122245192527771,
+ "learning_rate": 2.9758561775145837e-05,
+ "loss": 0.9134,
+ "step": 140
+ },
+ {
+ "epoch": 0.32805429864253394,
+ "grad_norm": 0.6558977961540222,
+ "learning_rate": 2.9723971130591053e-05,
+ "loss": 0.8639,
+ "step": 145
+ },
+ {
+ "epoch": 0.3393665158371041,
+ "grad_norm": 0.6984571218490601,
+ "learning_rate": 2.9687088522956894e-05,
+ "loss": 0.8991,
+ "step": 150
+ },
+ {
+ "epoch": 0.3506787330316742,
+ "grad_norm": 0.6824187636375427,
+ "learning_rate": 2.9647919693464532e-05,
+ "loss": 0.8535,
+ "step": 155
+ },
+ {
+ "epoch": 0.36199095022624433,
+ "grad_norm": 0.7138025760650635,
+ "learning_rate": 2.9606470739213066e-05,
+ "loss": 0.9199,
+ "step": 160
+ },
+ {
+ "epoch": 0.3733031674208145,
+ "grad_norm": 0.7853896617889404,
+ "learning_rate": 2.956274811223042e-05,
+ "loss": 0.7881,
+ "step": 165
+ },
+ {
+ "epoch": 0.38461538461538464,
+ "grad_norm": 0.6674976348876953,
+ "learning_rate": 2.9516758618468994e-05,
+ "loss": 0.8497,
+ "step": 170
+ },
+ {
+ "epoch": 0.39592760180995473,
+ "grad_norm": 0.90818852186203,
+ "learning_rate": 2.9468509416746267e-05,
+ "loss": 0.7894,
+ "step": 175
+ },
+ {
+ "epoch": 0.4072398190045249,
+ "grad_norm": 0.7600741386413574,
+ "learning_rate": 2.9418008017630402e-05,
+ "loss": 0.7598,
+ "step": 180
+ },
+ {
+ "epoch": 0.41855203619909503,
+ "grad_norm": 0.9290881156921387,
+ "learning_rate": 2.9365262282271173e-05,
+ "loss": 0.7946,
+ "step": 185
+ },
+ {
+ "epoch": 0.4298642533936652,
+ "grad_norm": 0.761840283870697,
+ "learning_rate": 2.9310280421176255e-05,
+ "loss": 0.8197,
+ "step": 190
+ },
+ {
+ "epoch": 0.4411764705882353,
+ "grad_norm": 0.9088016748428345,
+ "learning_rate": 2.925307099293318e-05,
+ "loss": 0.7404,
+ "step": 195
+ },
+ {
+ "epoch": 0.45248868778280543,
+ "grad_norm": 0.7919666767120361,
+ "learning_rate": 2.9193642902877077e-05,
+ "loss": 0.7365,
+ "step": 200
+ },
+ {
+ "epoch": 0.4638009049773756,
+ "grad_norm": 0.8283450603485107,
+ "learning_rate": 2.9132005401704442e-05,
+ "loss": 0.7685,
+ "step": 205
+ },
+ {
+ "epoch": 0.4751131221719457,
+ "grad_norm": 0.7633830904960632,
+ "learning_rate": 2.906816808403319e-05,
+ "loss": 0.7842,
+ "step": 210
+ },
+ {
+ "epoch": 0.48642533936651583,
+ "grad_norm": 0.8434284925460815,
+ "learning_rate": 2.9002140886909087e-05,
+ "loss": 0.6578,
+ "step": 215
+ },
+ {
+ "epoch": 0.497737556561086,
+ "grad_norm": 0.8744111657142639,
+ "learning_rate": 2.893393408825898e-05,
+ "loss": 0.6381,
+ "step": 220
+ },
+ {
+ "epoch": 0.5090497737556561,
+ "grad_norm": 0.9379422068595886,
323
+ "learning_rate": 2.886355830529088e-05,
324
+ "loss": 0.6643,
325
+ "step": 225
326
+ },
327
+ {
328
+ "epoch": 0.5203619909502263,
329
+ "grad_norm": 0.949584424495697,
330
+ "learning_rate": 2.8791024492841274e-05,
331
+ "loss": 0.6735,
332
+ "step": 230
333
+ },
334
+ {
335
+ "epoch": 0.5316742081447964,
336
+ "grad_norm": 1.0218664407730103,
337
+ "learning_rate": 2.8716343941669888e-05,
338
+ "loss": 0.644,
339
+ "step": 235
+ },
+ {
+ "epoch": 0.5429864253393665,
+ "grad_norm": 0.9621919989585876,
+ "learning_rate": 2.863952827670212e-05,
+ "loss": 0.6985,
+ "step": 240
+ },
+ {
+ "epoch": 0.5542986425339367,
+ "grad_norm": 1.0091310739517212,
+ "learning_rate": 2.8560589455219503e-05,
+ "loss": 0.5793,
+ "step": 245
+ },
+ {
+ "epoch": 0.5656108597285068,
+ "grad_norm": 0.957200825214386,
+ "learning_rate": 2.8479539764998393e-05,
+ "loss": 0.6588,
+ "step": 250
+ },
+ {
+ "epoch": 0.5769230769230769,
+ "grad_norm": 1.1031014919281006,
+ "learning_rate": 2.8396391822397238e-05,
+ "loss": 0.6396,
+ "step": 255
+ },
+ {
+ "epoch": 0.5882352941176471,
+ "grad_norm": 0.9956181645393372,
+ "learning_rate": 2.8311158570392694e-05,
+ "loss": 0.5824,
+ "step": 260
+ },
+ {
+ "epoch": 0.5995475113122172,
+ "grad_norm": 0.9678698182106018,
+ "learning_rate": 2.822385327656488e-05,
+ "loss": 0.5908,
+ "step": 265
+ },
+ {
+ "epoch": 0.6108597285067874,
+ "grad_norm": 0.9828930497169495,
+ "learning_rate": 2.8134489531032144e-05,
+ "loss": 0.5699,
+ "step": 270
+ },
+ {
+ "epoch": 0.6221719457013575,
+ "grad_norm": 0.9715862274169922,
+ "learning_rate": 2.804308124433557e-05,
+ "loss": 0.6081,
+ "step": 275
+ },
+ {
+ "epoch": 0.6334841628959276,
+ "grad_norm": 1.006686806678772,
+ "learning_rate": 2.794964264527365e-05,
+ "loss": 0.5383,
+ "step": 280
+ },
+ {
+ "epoch": 0.6447963800904978,
+ "grad_norm": 0.990519106388092,
+ "learning_rate": 2.78541882786874e-05,
+ "loss": 0.5828,
+ "step": 285
+ },
+ {
+ "epoch": 0.6561085972850679,
+ "grad_norm": 0.9568750262260437,
+ "learning_rate": 2.7756733003196287e-05,
+ "loss": 0.5805,
+ "step": 290
+ },
+ {
+ "epoch": 0.667420814479638,
+ "grad_norm": 0.9401103854179382,
+ "learning_rate": 2.765729198888529e-05,
+ "loss": 0.5703,
+ "step": 295
+ },
+ {
+ "epoch": 0.6787330316742082,
+ "grad_norm": 1.061906337738037,
+ "learning_rate": 2.7555880714943506e-05,
+ "loss": 0.5606,
+ "step": 300
+ },
+ {
+ "epoch": 0.6900452488687783,
+ "grad_norm": 1.091336727142334,
+ "learning_rate": 2.745251496725462e-05,
+ "loss": 0.5362,
+ "step": 305
+ },
+ {
+ "epoch": 0.7013574660633484,
+ "grad_norm": 1.0048516988754272,
+ "learning_rate": 2.7347210835939657e-05,
+ "loss": 0.5174,
+ "step": 310
+ },
+ {
+ "epoch": 0.7126696832579186,
+ "grad_norm": 1.072752594947815,
+ "learning_rate": 2.7239984712852344e-05,
+ "loss": 0.533,
+ "step": 315
+ },
+ {
+ "epoch": 0.7239819004524887,
+ "grad_norm": 0.9952567219734192,
+ "learning_rate": 2.7130853289027526e-05,
+ "loss": 0.5424,
+ "step": 320
+ },
+ {
+ "epoch": 0.7352941176470589,
+ "grad_norm": 0.9514249563217163,
+ "learning_rate": 2.7019833552083016e-05,
+ "loss": 0.4976,
+ "step": 325
+ },
+ {
+ "epoch": 0.746606334841629,
+ "grad_norm": 0.9675179719924927,
+ "learning_rate": 2.6906942783575258e-05,
+ "loss": 0.488,
+ "step": 330
+ },
+ {
+ "epoch": 0.7579185520361991,
+ "grad_norm": 1.1137731075286865,
+ "learning_rate": 2.679219855630925e-05,
+ "loss": 0.4723,
+ "step": 335
+ },
+ {
+ "epoch": 0.7692307692307693,
+ "grad_norm": 0.9949238896369934,
+ "learning_rate": 2.6675618731603107e-05,
+ "loss": 0.4572,
+ "step": 340
+ },
+ {
+ "epoch": 0.7805429864253394,
+ "grad_norm": 1.0572351217269897,
+ "learning_rate": 2.6557221456507775e-05,
+ "loss": 0.4586,
+ "step": 345
+ },
+ {
+ "epoch": 0.7918552036199095,
+ "grad_norm": 0.9844952821731567,
+ "learning_rate": 2.643702516098218e-05,
+ "loss": 0.5,
+ "step": 350
+ },
+ {
+ "epoch": 0.8031674208144797,
+ "grad_norm": 1.023716688156128,
+ "learning_rate": 2.6315048555024396e-05,
+ "loss": 0.4584,
+ "step": 355
+ },
+ {
+ "epoch": 0.8144796380090498,
+ "grad_norm": 1.109480857849121,
+ "learning_rate": 2.6191310625759232e-05,
+ "loss": 0.4768,
+ "step": 360
+ },
+ {
+ "epoch": 0.8257918552036199,
+ "grad_norm": 1.0938293933868408,
+ "learning_rate": 2.6065830634482625e-05,
+ "loss": 0.4536,
+ "step": 365
+ },
+ {
+ "epoch": 0.8371040723981901,
+ "grad_norm": 1.0279819965362549,
+ "learning_rate": 2.5938628113663415e-05,
+ "loss": 0.408,
+ "step": 370
+ },
+ {
+ "epoch": 0.8484162895927602,
+ "grad_norm": 1.1595271825790405,
+ "learning_rate": 2.5809722863902857e-05,
+ "loss": 0.4498,
+ "step": 375
+ },
+ {
+ "epoch": 0.8597285067873304,
+ "grad_norm": 1.0043530464172363,
+ "learning_rate": 2.567913495085244e-05,
+ "loss": 0.3761,
+ "step": 380
+ },
+ {
+ "epoch": 0.8710407239819005,
+ "grad_norm": 1.1296460628509521,
+ "learning_rate": 2.554688470209041e-05,
+ "loss": 0.4506,
+ "step": 385
+ },
+ {
+ "epoch": 0.8823529411764706,
+ "grad_norm": 1.0064283609390259,
+ "learning_rate": 2.5412992703957556e-05,
+ "loss": 0.4057,
+ "step": 390
+ },
+ {
+ "epoch": 0.8936651583710408,
+ "grad_norm": 1.2273569107055664,
+ "learning_rate": 2.5277479798352682e-05,
+ "loss": 0.3504,
+ "step": 395
+ },
+ {
+ "epoch": 0.9049773755656109,
+ "grad_norm": 0.9904837608337402,
+ "learning_rate": 2.514036707948833e-05,
+ "loss": 0.3769,
+ "step": 400
+ },
+ {
+ "epoch": 0.916289592760181,
+ "grad_norm": 1.0377897024154663,
+ "learning_rate": 2.5001675890607195e-05,
+ "loss": 0.3822,
+ "step": 405
+ },
+ {
+ "epoch": 0.9276018099547512,
+ "grad_norm": 1.0362907648086548,
+ "learning_rate": 2.4861427820659813e-05,
+ "loss": 0.4422,
+ "step": 410
+ },
+ {
+ "epoch": 0.9389140271493213,
+ "grad_norm": 1.167650580406189,
+ "learning_rate": 2.471964470094396e-05,
+ "loss": 0.3556,
+ "step": 415
+ },
+ {
+ "epoch": 0.9502262443438914,
+ "grad_norm": 1.0661512613296509,
+ "learning_rate": 2.4576348601706366e-05,
+ "loss": 0.4195,
+ "step": 420
+ },
+ {
+ "epoch": 0.9615384615384616,
+ "grad_norm": 1.0818893909454346,
+ "learning_rate": 2.4431561828707208e-05,
+ "loss": 0.3946,
+ "step": 425
+ },
+ {
+ "epoch": 0.9728506787330317,
+ "grad_norm": 1.1853892803192139,
+ "learning_rate": 2.428530691974795e-05,
+ "loss": 0.3808,
+ "step": 430
+ },
+ {
+ "epoch": 0.9841628959276018,
+ "grad_norm": 1.039306402206421,
+ "learning_rate": 2.4137606641163064e-05,
+ "loss": 0.3995,
+ "step": 435
+ },
+ {
+ "epoch": 0.995475113122172,
+ "grad_norm": 1.1625009775161743,
+ "learning_rate": 2.3988483984276174e-05,
+ "loss": 0.3418,
+ "step": 440
+ },
+ {
+ "epoch": 1.006787330316742,
+ "grad_norm": 1.0703306198120117,
+ "learning_rate": 2.3837962161821183e-05,
+ "loss": 0.3746,
+ "step": 445
+ },
+ {
+ "epoch": 1.0180995475113122,
+ "grad_norm": 1.224336862564087,
+ "learning_rate": 2.368606460432894e-05,
+ "loss": 0.276,
+ "step": 450
+ },
+ {
+ "epoch": 1.0294117647058822,
+ "grad_norm": 1.0999438762664795,
+ "learning_rate": 2.353281495647998e-05,
+ "loss": 0.339,
+ "step": 455
+ },
+ {
+ "epoch": 1.0407239819004526,
+ "grad_norm": 1.1343170404434204,
+ "learning_rate": 2.3378237073423957e-05,
+ "loss": 0.3284,
+ "step": 460
+ },
+ {
+ "epoch": 1.0520361990950227,
+ "grad_norm": 1.0221738815307617,
+ "learning_rate": 2.322235501706629e-05,
+ "loss": 0.2856,
+ "step": 465
+ },
+ {
+ "epoch": 1.0633484162895928,
+ "grad_norm": 1.1489653587341309,
+ "learning_rate": 2.3065193052322667e-05,
+ "loss": 0.2856,
+ "step": 470
+ },
+ {
+ "epoch": 1.0746606334841629,
+ "grad_norm": 1.109666109085083,
+ "learning_rate": 2.2906775643341883e-05,
+ "loss": 0.3059,
+ "step": 475
+ },
+ {
+ "epoch": 1.085972850678733,
+ "grad_norm": 1.4277920722961426,
+ "learning_rate": 2.274712744969772e-05,
+ "loss": 0.3227,
+ "step": 480
+ },
+ {
+ "epoch": 1.0972850678733033,
+ "grad_norm": 1.0936839580535889,
+ "learning_rate": 2.2586273322550404e-05,
+ "loss": 0.299,
+ "step": 485
+ },
+ {
+ "epoch": 1.1085972850678734,
+ "grad_norm": 0.9595632553100586,
+ "learning_rate": 2.2424238300778176e-05,
+ "loss": 0.2735,
+ "step": 490
+ },
+ {
+ "epoch": 1.1199095022624435,
+ "grad_norm": 0.9666547775268555,
+ "learning_rate": 2.226104760707974e-05,
+ "loss": 0.3001,
+ "step": 495
+ },
+ {
+ "epoch": 1.1312217194570136,
+ "grad_norm": 1.0554572343826294,
+ "learning_rate": 2.2096726644048016e-05,
+ "loss": 0.2673,
+ "step": 500
+ },
+ {
+ "epoch": 1.1425339366515836,
+ "grad_norm": 1.1831492185592651,
+ "learning_rate": 2.1931300990215943e-05,
+ "loss": 0.2567,
+ "step": 505
+ },
+ {
+ "epoch": 1.1538461538461537,
+ "grad_norm": 1.0285236835479736,
+ "learning_rate": 2.176479639607485e-05,
+ "loss": 0.2869,
+ "step": 510
+ },
+ {
+ "epoch": 1.165158371040724,
+ "grad_norm": 1.0996110439300537,
+ "learning_rate": 2.159723878006609e-05,
+ "loss": 0.2431,
+ "step": 515
+ },
+ {
+ "epoch": 1.1764705882352942,
+ "grad_norm": 1.2502623796463013,
+ "learning_rate": 2.142865422454654e-05,
+ "loss": 0.2799,
+ "step": 520
+ },
+ {
+ "epoch": 1.1877828054298643,
+ "grad_norm": 1.1434515714645386,
+ "learning_rate": 2.1259068971728547e-05,
+ "loss": 0.2511,
+ "step": 525
+ },
+ {
+ "epoch": 1.1990950226244343,
+ "grad_norm": 1.0576070547103882,
+ "learning_rate": 2.1088509419595007e-05,
+ "loss": 0.2454,
+ "step": 530
+ },
+ {
+ "epoch": 1.2104072398190044,
+ "grad_norm": 1.1324952840805054,
+ "learning_rate": 2.0917002117790247e-05,
+ "loss": 0.2721,
+ "step": 535
+ },
+ {
+ "epoch": 1.2217194570135748,
+ "grad_norm": 0.9946959614753723,
+ "learning_rate": 2.0744573763487195e-05,
+ "loss": 0.2441,
+ "step": 540
+ },
+ {
+ "epoch": 1.2330316742081449,
+ "grad_norm": 1.074367880821228,
+ "learning_rate": 2.057125119723168e-05,
+ "loss": 0.2553,
+ "step": 545
+ },
+ {
+ "epoch": 1.244343891402715,
+ "grad_norm": 1.0800132751464844,
+ "learning_rate": 2.0397061398764367e-05,
+ "loss": 0.2352,
+ "step": 550
+ },
+ {
+ "epoch": 1.255656108597285,
+ "grad_norm": 1.1620769500732422,
+ "learning_rate": 2.0222031482821033e-05,
+ "loss": 0.2393,
+ "step": 555
+ },
+ {
+ "epoch": 1.2669683257918551,
+ "grad_norm": 1.0169572830200195,
+ "learning_rate": 2.004618869491186e-05,
+ "loss": 0.2284,
+ "step": 560
+ },
+ {
+ "epoch": 1.2782805429864252,
+ "grad_norm": 1.1703081130981445,
+ "learning_rate": 1.9869560407080295e-05,
+ "loss": 0.2247,
+ "step": 565
+ },
+ {
+ "epoch": 1.2895927601809956,
+ "grad_norm": 1.1945836544036865,
+ "learning_rate": 1.9692174113642307e-05,
+ "loss": 0.2399,
+ "step": 570
+ },
+ {
+ "epoch": 1.3009049773755657,
+ "grad_norm": 1.2127693891525269,
+ "learning_rate": 1.9514057426906536e-05,
+ "loss": 0.2307,
+ "step": 575
+ },
+ {
+ "epoch": 1.3122171945701357,
+ "grad_norm": 1.2538022994995117,
+ "learning_rate": 1.933523807287612e-05,
+ "loss": 0.2236,
+ "step": 580
+ },
+ {
+ "epoch": 1.3235294117647058,
+ "grad_norm": 1.0086054801940918,
+ "learning_rate": 1.9155743886932825e-05,
+ "loss": 0.2342,
+ "step": 585
+ },
+ {
+ "epoch": 1.334841628959276,
+ "grad_norm": 1.0805895328521729,
+ "learning_rate": 1.8975602809504086e-05,
+ "loss": 0.2301,
+ "step": 590
+ },
+ {
+ "epoch": 1.3461538461538463,
+ "grad_norm": 1.1413661241531372,
+ "learning_rate": 1.8794842881713793e-05,
+ "loss": 0.2222,
+ "step": 595
+ },
+ {
+ "epoch": 1.3574660633484164,
+ "grad_norm": 1.0291798114776611,
+ "learning_rate": 1.861349224101733e-05,
+ "loss": 0.2176,
+ "step": 600
+ },
+ {
+ "epoch": 1.3687782805429864,
+ "grad_norm": 1.2109662294387817,
+ "learning_rate": 1.8431579116821643e-05,
+ "loss": 0.2065,
+ "step": 605
+ },
+ {
+ "epoch": 1.3800904977375565,
+ "grad_norm": 1.1848989725112915,
+ "learning_rate": 1.824913182609099e-05,
+ "loss": 0.2021,
+ "step": 610
+ },
+ {
+ "epoch": 1.3914027149321266,
+ "grad_norm": 1.0305408239364624,
+ "learning_rate": 1.806617876893907e-05,
+ "loss": 0.217,
+ "step": 615
+ },
+ {
+ "epoch": 1.4027149321266967,
+ "grad_norm": 1.0627455711364746,
+ "learning_rate": 1.7882748424208227e-05,
+ "loss": 0.2013,
+ "step": 620
+ },
+ {
+ "epoch": 1.4140271493212668,
+ "grad_norm": 1.097080945968628,
+ "learning_rate": 1.7698869345036323e-05,
+ "loss": 0.207,
+ "step": 625
+ },
+ {
+ "epoch": 1.4253393665158371,
+ "grad_norm": 1.036790370941162,
+ "learning_rate": 1.7514570154412146e-05,
+ "loss": 0.2132,
+ "step": 630
+ },
+ {
+ "epoch": 1.4366515837104072,
+ "grad_norm": 1.318554401397705,
+ "learning_rate": 1.7329879540719878e-05,
+ "loss": 0.2031,
+ "step": 635
+ },
+ {
+ "epoch": 1.4479638009049773,
+ "grad_norm": 1.0702239274978638,
+ "learning_rate": 1.7144826253273405e-05,
+ "loss": 0.2288,
+ "step": 640
+ },
+ {
+ "epoch": 1.4592760180995474,
+ "grad_norm": 0.9932087063789368,
+ "learning_rate": 1.6959439097841134e-05,
+ "loss": 0.1927,
+ "step": 645
+ },
+ {
+ "epoch": 1.4705882352941178,
+ "grad_norm": 0.9612034559249878,
+ "learning_rate": 1.6773746932162063e-05,
+ "loss": 0.1906,
+ "step": 650
+ },
+ {
+ "epoch": 1.4819004524886878,
+ "grad_norm": 0.9303610324859619,
+ "learning_rate": 1.6587778661453674e-05,
+ "loss": 0.2128,
+ "step": 655
+ },
+ {
+ "epoch": 1.493212669683258,
+ "grad_norm": 0.9586461186408997,
+ "learning_rate": 1.6401563233912527e-05,
+ "loss": 0.1658,
+ "step": 660
+ },
+ {
+ "epoch": 1.504524886877828,
+ "grad_norm": 1.118667721748352,
+ "learning_rate": 1.6215129636208106e-05,
+ "loss": 0.197,
+ "step": 665
+ },
+ {
+ "epoch": 1.5158371040723981,
+ "grad_norm": 1.0532863140106201,
+ "learning_rate": 1.6028506888970708e-05,
+ "loss": 0.1949,
+ "step": 670
+ },
+ {
+ "epoch": 1.5271493212669682,
+ "grad_norm": 1.091910719871521,
+ "learning_rate": 1.584172404227404e-05,
+ "loss": 0.1639,
+ "step": 675
+ },
+ {
+ "epoch": 1.5384615384615383,
+ "grad_norm": 1.082388997077942,
+ "learning_rate": 1.5654810171113197e-05,
+ "loss": 0.1806,
+ "step": 680
+ },
+ {
+ "epoch": 1.5497737556561086,
+ "grad_norm": 0.9903051853179932,
+ "learning_rate": 1.546779437087881e-05,
+ "loss": 0.1664,
+ "step": 685
+ },
+ {
+ "epoch": 1.5610859728506787,
+ "grad_norm": 1.2020766735076904,
+ "learning_rate": 1.5280705752828e-05,
+ "loss": 0.1714,
+ "step": 690
+ },
+ {
+ "epoch": 1.5723981900452488,
+ "grad_norm": 0.9573895335197449,
+ "learning_rate": 1.5093573439552856e-05,
+ "loss": 0.1521,
+ "step": 695
+ },
+ {
+ "epoch": 1.5837104072398192,
+ "grad_norm": 1.006414771080017,
+ "learning_rate": 1.4906426560447147e-05,
+ "loss": 0.1891,
+ "step": 700
+ },
+ {
+ "epoch": 1.5950226244343892,
+ "grad_norm": 1.053931713104248,
+ "learning_rate": 1.4719294247172007e-05,
+ "loss": 0.1689,
+ "step": 705
+ },
+ {
+ "epoch": 1.6063348416289593,
+ "grad_norm": 1.025380253791809,
+ "learning_rate": 1.4532205629121196e-05,
+ "loss": 0.1735,
+ "step": 710
+ },
+ {
+ "epoch": 1.6176470588235294,
+ "grad_norm": 1.0261629819869995,
+ "learning_rate": 1.4345189828886806e-05,
+ "loss": 0.16,
+ "step": 715
+ },
+ {
+ "epoch": 1.6289592760180995,
+ "grad_norm": 0.9993324279785156,
+ "learning_rate": 1.4158275957725964e-05,
+ "loss": 0.1559,
+ "step": 720
+ },
+ {
+ "epoch": 1.6402714932126696,
+ "grad_norm": 0.8579344153404236,
+ "learning_rate": 1.3971493111029293e-05,
+ "loss": 0.1645,
+ "step": 725
+ },
+ {
+ "epoch": 1.6515837104072397,
+ "grad_norm": 1.0291427373886108,
+ "learning_rate": 1.3784870363791903e-05,
+ "loss": 0.1795,
+ "step": 730
+ },
+ {
+ "epoch": 1.6628959276018098,
+ "grad_norm": 0.979483425617218,
+ "learning_rate": 1.3598436766087479e-05,
+ "loss": 0.1348,
+ "step": 735
+ },
+ {
+ "epoch": 1.6742081447963801,
+ "grad_norm": 1.1019129753112793,
+ "learning_rate": 1.341222133854633e-05,
+ "loss": 0.1524,
+ "step": 740
+ },
+ {
+ "epoch": 1.6855203619909502,
+ "grad_norm": 0.9728627800941467,
+ "learning_rate": 1.322625306783794e-05,
+ "loss": 0.1746,
+ "step": 745
+ },
+ {
+ "epoch": 1.6968325791855203,
+ "grad_norm": 1.0457085371017456,
+ "learning_rate": 1.3040560902158862e-05,
+ "loss": 0.1409,
+ "step": 750
+ },
+ {
+ "epoch": 1.7081447963800906,
+ "grad_norm": 1.043779730796814,
+ "learning_rate": 1.2855173746726602e-05,
+ "loss": 0.1468,
+ "step": 755
+ },
+ {
+ "epoch": 1.7194570135746607,
+ "grad_norm": 1.1534290313720703,
+ "learning_rate": 1.2670120459280128e-05,
+ "loss": 0.1485,
+ "step": 760
+ },
+ {
+ "epoch": 1.7307692307692308,
+ "grad_norm": 1.142231822013855,
+ "learning_rate": 1.2485429845587862e-05,
+ "loss": 0.1657,
+ "step": 765
+ },
+ {
+ "epoch": 1.742081447963801,
+ "grad_norm": 0.970547616481781,
+ "learning_rate": 1.230113065496368e-05,
+ "loss": 0.1461,
+ "step": 770
+ },
+ {
+ "epoch": 1.753393665158371,
+ "grad_norm": 1.1664538383483887,
+ "learning_rate": 1.2117251575791775e-05,
+ "loss": 0.1573,
+ "step": 775
+ },
+ {
+ "epoch": 1.7647058823529411,
+ "grad_norm": 1.163459062576294,
+ "learning_rate": 1.1933821231060932e-05,
+ "loss": 0.1433,
+ "step": 780
+ },
+ {
+ "epoch": 1.7760180995475112,
+ "grad_norm": 1.04987370967865,
+ "learning_rate": 1.1750868173909014e-05,
+ "loss": 0.1516,
+ "step": 785
+ },
+ {
+ "epoch": 1.7873303167420813,
+ "grad_norm": 0.9961551427841187,
+ "learning_rate": 1.1568420883178363e-05,
+ "loss": 0.1108,
+ "step": 790
+ },
+ {
+ "epoch": 1.7986425339366516,
+ "grad_norm": 1.1734639406204224,
+ "learning_rate": 1.1386507758982672e-05,
+ "loss": 0.1537,
+ "step": 795
+ },
+ {
+ "epoch": 1.8099547511312217,
+ "grad_norm": 0.9961851835250854,
+ "learning_rate": 1.1205157118286203e-05,
+ "loss": 0.1611,
+ "step": 800
+ },
+ {
+ "epoch": 1.8212669683257918,
+ "grad_norm": 0.9870208501815796,
+ "learning_rate": 1.1024397190495915e-05,
+ "loss": 0.1463,
+ "step": 805
+ },
+ {
+ "epoch": 1.8325791855203621,
+ "grad_norm": 1.024949073791504,
+ "learning_rate": 1.0844256113067177e-05,
+ "loss": 0.121,
+ "step": 810
+ },
+ {
+ "epoch": 1.8438914027149322,
+ "grad_norm": 0.8817871809005737,
+ "learning_rate": 1.0664761927123882e-05,
+ "loss": 0.1153,
+ "step": 815
+ },
+ {
+ "epoch": 1.8552036199095023,
+ "grad_norm": 0.9961386322975159,
+ "learning_rate": 1.0485942573093468e-05,
+ "loss": 0.1223,
+ "step": 820
+ },
+ {
+ "epoch": 1.8665158371040724,
+ "grad_norm": 0.9679275155067444,
+ "learning_rate": 1.0307825886357697e-05,
+ "loss": 0.1093,
+ "step": 825
+ },
+ {
+ "epoch": 1.8778280542986425,
+ "grad_norm": 0.9038271307945251,
+ "learning_rate": 1.0130439592919706e-05,
+ "loss": 0.1123,
+ "step": 830
+ },
+ {
+ "epoch": 1.8891402714932126,
+ "grad_norm": 1.0145912170410156,
+ "learning_rate": 9.953811305088142e-06,
+ "loss": 0.1186,
+ "step": 835
+ },
+ {
+ "epoch": 1.9004524886877827,
+ "grad_norm": 0.7669517993927002,
+ "learning_rate": 9.777968517178967e-06,
+ "loss": 0.1006,
+ "step": 840
+ },
+ {
+ "epoch": 1.9117647058823528,
+ "grad_norm": 0.8728093504905701,
+ "learning_rate": 9.60293860123564e-06,
+ "loss": 0.1111,
+ "step": 845
+ },
+ {
+ "epoch": 1.9230769230769231,
+ "grad_norm": 0.7810572385787964,
+ "learning_rate": 9.428748802768328e-06,
+ "loss": 0.1129,
+ "step": 850
+ },
+ {
+ "epoch": 1.9343891402714932,
+ "grad_norm": 0.8908131122589111,
+ "learning_rate": 9.25542623651281e-06,
+ "loss": 0.1313,
+ "step": 855
+ },
+ {
+ "epoch": 1.9457013574660633,
+ "grad_norm": 1.1132276058197021,
+ "learning_rate": 9.082997882209754e-06,
+ "loss": 0.1236,
+ "step": 860
+ },
+ {
+ "epoch": 1.9570135746606336,
+ "grad_norm": 1.0585702657699585,
+ "learning_rate": 8.911490580404996e-06,
+ "loss": 0.1163,
+ "step": 865
+ },
+ {
+ "epoch": 1.9683257918552037,
+ "grad_norm": 1.0346885919570923,
+ "learning_rate": 8.740931028271462e-06,
+ "loss": 0.1019,
+ "step": 870
+ },
+ {
+ "epoch": 1.9796380090497738,
+ "grad_norm": 0.9383977651596069,
+ "learning_rate": 8.571345775453468e-06,
+ "loss": 0.1052,
+ "step": 875
+ },
+ {
+ "epoch": 1.990950226244344,
+ "grad_norm": 1.0390316247940063,
+ "learning_rate": 8.402761219933911e-06,
+ "loss": 0.0939,
+ "step": 880
+ },
+ {
+ "epoch": 2.002262443438914,
+ "grad_norm": 0.9927393198013306,
+ "learning_rate": 8.23520360392515e-06,
+ "loss": 0.1119,
+ "step": 885
+ },
+ {
+ "epoch": 2.013574660633484,
+ "grad_norm": 0.8094943165779114,
+ "learning_rate": 8.068699009784057e-06,
+ "loss": 0.0932,
+ "step": 890
+ },
+ {
+ "epoch": 2.024886877828054,
+ "grad_norm": 0.9272624254226685,
+ "learning_rate": 7.90327335595198e-06,
+ "loss": 0.0903,
+ "step": 895
+ },
+ {
+ "epoch": 2.0361990950226243,
+ "grad_norm": 0.7852720618247986,
+ "learning_rate": 7.738952392920262e-06,
+ "loss": 0.0907,
+ "step": 900
+ },
+ {
+ "epoch": 2.0475113122171944,
+ "grad_norm": 0.8599071502685547,
+ "learning_rate": 7.575761699221828e-06,
+ "loss": 0.0859,
+ "step": 905
+ },
+ {
+ "epoch": 2.0588235294117645,
+ "grad_norm": 0.7176490426063538,
+ "learning_rate": 7.413726677449603e-06,
+ "loss": 0.0789,
+ "step": 910
+ },
+ {
+ "epoch": 2.070135746606335,
+ "grad_norm": 0.7863131761550903,
+ "learning_rate": 7.252872550302278e-06,
+ "loss": 0.0825,
+ "step": 915
+ },
+ {
+ "epoch": 2.081447963800905,
+ "grad_norm": 0.9784367084503174,
+ "learning_rate": 7.093224356658117e-06,
+ "loss": 0.0821,
+ "step": 920
+ },
+ {
+ "epoch": 2.0927601809954752,
+ "grad_norm": 0.7447630763053894,
+ "learning_rate": 6.934806947677335e-06,
+ "loss": 0.0738,
+ "step": 925
+ },
+ {
+ "epoch": 2.1040723981900453,
+ "grad_norm": 0.7353556752204895,
+ "learning_rate": 6.7776449829337065e-06,
+ "loss": 0.0773,
+ "step": 930
+ },
+ {
+ "epoch": 2.1153846153846154,
+ "grad_norm": 0.6796473860740662,
+ "learning_rate": 6.621762926576046e-06,
+ "loss": 0.0799,
+ "step": 935
+ },
+ {
+ "epoch": 2.1266968325791855,
+ "grad_norm": 0.691005527973175,
+ "learning_rate": 6.467185043520024e-06,
+ "loss": 0.0833,
+ "step": 940
+ },
+ {
+ "epoch": 2.1380090497737556,
+ "grad_norm": 0.8786527514457703,
+ "learning_rate": 6.313935395671061e-06,
+ "loss": 0.0729,
+ "step": 945
+ },
+ {
+ "epoch": 2.1493212669683257,
+ "grad_norm": 0.9278671741485596,
+ "learning_rate": 6.162037838178821e-06,
+ "loss": 0.0826,
+ "step": 950
+ },
+ {
+ "epoch": 2.160633484162896,
+ "grad_norm": 0.7139242887496948,
+ "learning_rate": 6.01151601572383e-06,
+ "loss": 0.0699,
+ "step": 955
+ },
+ {
+ "epoch": 2.171945701357466,
+ "grad_norm": 0.6887299418449402,
+ "learning_rate": 5.86239335883694e-06,
+ "loss": 0.0701,
+ "step": 960
+ },
+ {
+ "epoch": 2.183257918552036,
+ "grad_norm": 0.6852268576622009,
+ "learning_rate": 5.71469308025205e-06,
+ "loss": 0.0741,
+ "step": 965
+ },
+ {
+ "epoch": 2.1945701357466065,
+ "grad_norm": 0.9389444589614868,
+ "learning_rate": 5.568438171292794e-06,
+ "loss": 0.0684,
+ "step": 970
+ },
+ {
+ "epoch": 2.2058823529411766,
+ "grad_norm": 0.7542385458946228,
+ "learning_rate": 5.4236513982936396e-06,
+ "loss": 0.0754,
+ "step": 975
+ },
+ {
+ "epoch": 2.2171945701357467,
+ "grad_norm": 0.6936740875244141,
+ "learning_rate": 5.280355299056043e-06,
+ "loss": 0.071,
+ "step": 980
+ },
+ {
+ "epoch": 2.228506787330317,
+ "grad_norm": 0.8076568841934204,
+ "learning_rate": 5.138572179340193e-06,
+ "loss": 0.0668,
+ "step": 985
+ },
+ {
+ "epoch": 2.239819004524887,
+ "grad_norm": 0.9253296256065369,
+ "learning_rate": 4.998324109392807e-06,
+ "loss": 0.0811,
+ "step": 990
+ },
+ {
+ "epoch": 2.251131221719457,
+ "grad_norm": 0.761478066444397,
+ "learning_rate": 4.859632920511675e-06,
+ "loss": 0.0664,
+ "step": 995
+ },
+ {
+ "epoch": 2.262443438914027,
+ "grad_norm": 0.7460895776748657,
+ "learning_rate": 4.7225202016473195e-06,
+ "loss": 0.0763,
+ "step": 1000
+ },
+ {
+ "epoch": 2.273755656108597,
+ "grad_norm": 0.7208731174468994,
+ "learning_rate": 4.587007296042448e-06,
+ "loss": 0.0728,
+ "step": 1005
+ },
+ {
+ "epoch": 2.2850678733031673,
+ "grad_norm": 0.9119418859481812,
+ "learning_rate": 4.453115297909595e-06,
+ "loss": 0.0627,
+ "step": 1010
+ },
+ {
+ "epoch": 2.2963800904977374,
+ "grad_norm": 0.8187896609306335,
+ "learning_rate": 4.320865049147563e-06,
+ "loss": 0.0764,
+ "step": 1015
+ },
+ {
+ "epoch": 2.3076923076923075,
+ "grad_norm": 0.7876666188240051,
+ "learning_rate": 4.190277136097146e-06,
+ "loss": 0.0571,
+ "step": 1020
+ },
+ {
+ "epoch": 2.3190045248868776,
+ "grad_norm": 0.7433528304100037,
+ "learning_rate": 4.061371886336584e-06,
+ "loss": 0.0675,
+ "step": 1025
+ },
+ {
+ "epoch": 2.330316742081448,
+ "grad_norm": 0.7232070565223694,
+ "learning_rate": 3.93416936551737e-06,
+ "loss": 0.0759,
+ "step": 1030
+ },
+ {
+ "epoch": 2.341628959276018,
+ "grad_norm": 0.770138680934906,
+ "learning_rate": 3.808689374240769e-06,
+ "loss": 0.0804,
+ "step": 1035
+ },
+ {
+ "epoch": 2.3529411764705883,
+ "grad_norm": 0.7326589226722717,
+ "learning_rate": 3.684951444975608e-06,
+ "loss": 0.0644,
+ "step": 1040
+ },
+ {
+ "epoch": 2.3642533936651584,
+ "grad_norm": 0.6037160158157349,
+ "learning_rate": 3.5629748390178295e-06,
+ "loss": 0.0685,
+ "step": 1045
+ },
+ {
+ "epoch": 2.3755656108597285,
+ "grad_norm": 0.7806716561317444,
+ "learning_rate": 3.442778543492227e-06,
+ "loss": 0.0597,
+ "step": 1050
+ },
+ {
+ "epoch": 2.3868778280542986,
+ "grad_norm": 0.6830331683158875,
+ "learning_rate": 3.324381268396896e-06,
+ "loss": 0.0677,
+ "step": 1055
+ },
+ {
+ "epoch": 2.3981900452488687,
+ "grad_norm": 0.9457846283912659,
+ "learning_rate": 3.2078014436907556e-06,
+ "loss": 0.0785,
+ "step": 1060
+ },
+ {
+ "epoch": 2.409502262443439,
+ "grad_norm": 0.5633257627487183,
+ "learning_rate": 3.0930572164247408e-06,
+ "loss": 0.0574,
+ "step": 1065
+ },
+ {
+ "epoch": 2.420814479638009,
+ "grad_norm": 0.7113518714904785,
+ "learning_rate": 2.9801664479169845e-06,
+ "loss": 0.0541,
+ "step": 1070
+ },
+ {
+ "epoch": 2.4321266968325794,
+ "grad_norm": 0.6136440634727478,
+ "learning_rate": 2.8691467109724777e-06,
+ "loss": 0.0557,
+ "step": 1075
+ },
+ {
+ "epoch": 2.4434389140271495,
+ "grad_norm": 0.629156231880188,
+ "learning_rate": 2.760015287147662e-06,
+ "loss": 0.0617,
+ "step": 1080
+ },
+ {
+ "epoch": 2.4547511312217196,
+ "grad_norm": 0.6548119187355042,
+ "learning_rate": 2.652789164060346e-06,
+ "loss": 0.0817,
+ "step": 1085
+ },
+ {
+ "epoch": 2.4660633484162897,
+ "grad_norm": 0.8351320624351501,
+ "learning_rate": 2.5474850327453785e-06,
+ "loss": 0.0818,
+ "step": 1090
+ },
+ {
+ "epoch": 2.47737556561086,
+ "grad_norm": 0.5643877983093262,
+ "learning_rate": 2.4441192850564962e-06,
+ "loss": 0.0687,
+ "step": 1095
+ },
+ {
+ "epoch": 2.48868778280543,
+ "grad_norm": 0.5930564403533936,
+ "learning_rate": 2.342708011114708e-06,
+ "loss": 0.0617,
+ "step": 1100
+ },
+ {
+ "epoch": 2.5,
+ "grad_norm": 0.6870244741439819,
+ "learning_rate": 2.243266996803712e-06,
+ "loss": 0.0574,
+ "step": 1105
+ },
+ {
+ "epoch": 2.51131221719457,
+ "grad_norm": 0.6774495244026184,
+ "learning_rate": 2.1458117213126012e-06,
+ "loss": 0.0543,
+ "step": 1110
1565
+ },
1566
+ {
1567
+ "epoch": 2.52262443438914,
1568
+ "grad_norm": 0.657271683216095,
1569
+ "learning_rate": 2.0503573547263528e-06,
1570
+ "loss": 0.0617,
1571
+ "step": 1115
1572
+ },
1573
+ {
1574
+ "epoch": 2.5339366515837103,
1575
+ "grad_norm": 0.7475337386131287,
1576
+ "learning_rate": 1.9569187556644336e-06,
1577
+ "loss": 0.0642,
1578
+ "step": 1120
1579
+ },
1580
+ {
1581
+ "epoch": 2.5452488687782804,
1582
+ "grad_norm": 0.594307541847229,
1583
+ "learning_rate": 1.8655104689678555e-06,
1584
+ "loss": 0.057,
1585
+ "step": 1125
1586
+ },
1587
+ {
1588
+ "epoch": 2.5565610859728505,
1589
+ "grad_norm": 0.6223900318145752,
1590
+ "learning_rate": 1.7761467234351191e-06,
1591
+ "loss": 0.0645,
1592
+ "step": 1130
1593
+ },
1594
+ {
1595
+ "epoch": 2.5678733031674206,
1596
+ "grad_norm": 0.5344128608703613,
1597
+ "learning_rate": 1.6888414296073058e-06,
1598
+ "loss": 0.0582,
1599
+ "step": 1135
1600
+ },
1601
+ {
1602
+ "epoch": 2.579185520361991,
1603
+ "grad_norm": 0.5704408288002014,
1604
+ "learning_rate": 1.6036081776027623e-06,
1605
+ "loss": 0.0557,
1606
+ "step": 1140
1607
+ },
1608
+ {
1609
+ "epoch": 2.590497737556561,
1610
+ "grad_norm": 0.6671749353408813,
1611
+ "learning_rate": 1.52046023500161e-06,
1612
+ "loss": 0.0527,
1613
+ "step": 1145
1614
+ },
1615
+ {
1616
+ "epoch": 2.6018099547511313,
1617
+ "grad_norm": 0.6058287024497986,
1618
+ "learning_rate": 1.4394105447804994e-06,
1619
+ "loss": 0.0492,
1620
+ "step": 1150
1621
+ },
1622
+ {
1623
+ "epoch": 2.6131221719457014,
1624
+ "grad_norm": 0.48765239119529724,
1625
+ "learning_rate": 1.360471723297882e-06,
1626
+ "loss": 0.0568,
1627
+ "step": 1155
1628
+ },
1629
+ {
1630
+ "epoch": 2.6244343891402715,
1631
+ "grad_norm": 0.4897686243057251,
1632
+ "learning_rate": 1.2836560583301139e-06,
1633
+ "loss": 0.0499,
1634
+ "step": 1160
1635
+ },
1636
+ {
1637
+ "epoch": 2.6357466063348416,
1638
+ "grad_norm": 0.8854437470436096,
1639
+ "learning_rate": 1.20897550715873e-06,
1640
+ "loss": 0.0693,
1641
+ "step": 1165
1642
+ },
1643
+ {
1644
+ "epoch": 2.6470588235294117,
1645
+ "grad_norm": 0.7600730657577515,
1646
+ "learning_rate": 1.1364416947091244e-06,
1647
+ "loss": 0.0636,
1648
+ "step": 1170
1649
+ },
1650
+ {
1651
+ "epoch": 2.658371040723982,
1652
+ "grad_norm": 0.49667662382125854,
1653
+ "learning_rate": 1.066065911741021e-06,
1654
+ "loss": 0.0589,
1655
+ "step": 1175
1656
+ },
1657
+ {
1658
+ "epoch": 2.669683257918552,
1659
+ "grad_norm": 0.4326797425746918,
1660
+ "learning_rate": 9.978591130909142e-07,
1661
+ "loss": 0.0535,
1662
+ "step": 1180
1663
+ },
1664
+ {
1665
+ "epoch": 2.6809954751131224,
1666
+ "grad_norm": 0.5138698220252991,
1667
+ "learning_rate": 9.318319159668137e-07,
1668
+ "loss": 0.0499,
1669
+ "step": 1185
1670
+ },
1671
+ {
1672
+ "epoch": 2.6923076923076925,
1673
+ "grad_norm": 0.511077880859375,
1674
+ "learning_rate": 8.679945982955589e-07,
1675
+ "loss": 0.0513,
1676
+ "step": 1190
1677
+ },
1678
+ {
1679
+ "epoch": 2.7036199095022626,
1680
+ "grad_norm": 0.5522723197937012,
1681
+ "learning_rate": 8.063570971229245e-07,
1682
+ "loss": 0.0484,
1683
+ "step": 1195
1684
+ },
1685
+ {
1686
+ "epoch": 2.7149321266968327,
1687
+ "grad_norm": 0.818557620048523,
1688
+ "learning_rate": 7.469290070668189e-07,
1689
+ "loss": 0.0582,
1690
+ "step": 1200
1691
+ },
1692
+ {
1693
+ "epoch": 2.726244343891403,
1694
+ "grad_norm": 0.48831838369369507,
1695
+ "learning_rate": 6.897195788237442e-07,
1696
+ "loss": 0.0608,
1697
+ "step": 1205
1698
+ },
1699
+ {
1700
+ "epoch": 2.737556561085973,
1701
+ "grad_norm": 0.4119994640350342,
1702
+ "learning_rate": 6.347377177288283e-07,
1703
+ "loss": 0.0537,
1704
+ "step": 1210
1705
+ },
1706
+ {
1707
+ "epoch": 2.748868778280543,
1708
+ "grad_norm": 0.5442739129066467,
1709
+ "learning_rate": 5.819919823695996e-07,
1710
+ "loss": 0.0582,
1711
+ "step": 1215
1712
+ },
1713
+ {
1714
+ "epoch": 2.760180995475113,
1715
+ "grad_norm": 0.5425087809562683,
1716
+ "learning_rate": 5.31490583253737e-07,
1717
+ "loss": 0.0567,
1718
+ "step": 1220
1719
+ },
1720
+ {
1721
+ "epoch": 2.771493212669683,
1722
+ "grad_norm": 0.592939555644989,
1723
+ "learning_rate": 4.832413815310083e-07,
1724
+ "loss": 0.0622,
1725
+ "step": 1225
1726
+ },
1727
+ {
1728
+ "epoch": 2.7828054298642533,
1729
+ "grad_norm": 0.5542832016944885,
1730
+ "learning_rate": 4.3725188776958247e-07,
1731
+ "loss": 0.0585,
1732
+ "step": 1230
1733
+ },
1734
+ {
1735
+ "epoch": 2.7941176470588234,
1736
+ "grad_norm": 0.6092721819877625,
1737
+ "learning_rate": 3.935292607869334e-07,
1738
+ "loss": 0.068,
1739
+ "step": 1235
1740
+ },
1741
+ {
1742
+ "epoch": 2.8054298642533935,
1743
+ "grad_norm": 0.46700748801231384,
1744
+ "learning_rate": 3.520803065354694e-07,
1745
+ "loss": 0.0501,
1746
+ "step": 1240
1747
+ },
1748
+ {
1749
+ "epoch": 2.8167420814479636,
1750
+ "grad_norm": 0.5063658356666565,
1751
+ "learning_rate": 3.129114770431074e-07,
1752
+ "loss": 0.0544,
1753
+ "step": 1245
1754
+ },
1755
+ {
1756
+ "epoch": 2.8280542986425337,
1757
+ "grad_norm": 0.4532346725463867,
1758
+ "learning_rate": 2.7602886940894633e-07,
1759
+ "loss": 0.061,
1760
+ "step": 1250
1761
+ },
1762
+ {
1763
+ "epoch": 2.839366515837104,
1764
+ "grad_norm": 0.5276495814323425,
1765
+ "learning_rate": 2.41438224854168e-07,
1766
+ "loss": 0.0558,
1767
+ "step": 1255
1768
+ },
1769
+ {
1770
+ "epoch": 2.8506787330316743,
1771
+ "grad_norm": 0.619141697883606,
1772
+ "learning_rate": 2.0914492782835194e-07,
1773
+ "loss": 0.064,
1774
+ "step": 1260
1775
+ },
1776
+ {
1777
+ "epoch": 2.8619909502262444,
1778
+ "grad_norm": 0.5455244183540344,
1779
+ "learning_rate": 1.791540051713325e-07,
1780
+ "loss": 0.0471,
1781
+ "step": 1265
1782
+ },
1783
+ {
1784
+ "epoch": 2.8733031674208145,
1785
+ "grad_norm": 0.6298692226409912,
1786
+ "learning_rate": 1.514701253306866e-07,
1787
+ "loss": 0.0477,
1788
+ "step": 1270
1789
+ },
1790
+ {
1791
+ "epoch": 2.8846153846153846,
1792
+ "grad_norm": 0.4679732918739319,
1793
+ "learning_rate": 1.260975976350598e-07,
1794
+ "loss": 0.0621,
1795
+ "step": 1275
1796
+ },
1797
+ {
1798
+ "epoch": 2.8959276018099547,
1799
+ "grad_norm": 0.4622686207294464,
1800
+ "learning_rate": 1.0304037162334467e-07,
1801
+ "loss": 0.0613,
1802
+ "step": 1280
1803
+ },
1804
+ {
1805
+ "epoch": 2.9072398190045248,
1806
+ "grad_norm": 0.4919935464859009,
1807
+ "learning_rate": 8.23020364299093e-08,
1808
+ "loss": 0.0542,
1809
+ "step": 1285
1810
+ },
1811
+ {
1812
+ "epoch": 2.918552036199095,
1813
+ "grad_norm": 0.5526122450828552,
1814
+ "learning_rate": 6.388582022588241e-08,
1815
+ "loss": 0.063,
1816
+ "step": 1290
1817
+ },
1818
+ {
1819
+ "epoch": 2.9298642533936654,
1820
+ "grad_norm": 0.5424386262893677,
1821
+ "learning_rate": 4.779458971667205e-08,
1822
+ "loss": 0.0612,
1823
+ "step": 1295
1824
+ },
1825
+ {
1826
+ "epoch": 2.9411764705882355,
1827
+ "grad_norm": 0.49199432134628296,
1828
+ "learning_rate": 3.4030849695710905e-08,
1829
+ "loss": 0.0543,
1830
+ "step": 1300
1831
+ },
1832
+ {
1833
+ "epoch": 2.9524886877828056,
1834
+ "grad_norm": 0.608303427696228,
1835
+ "learning_rate": 2.2596742654564795e-08,
1836
+ "loss": 0.0646,
1837
+ "step": 1305
1838
+ },
1839
+ {
1840
+ "epoch": 2.9638009049773757,
1841
+ "grad_norm": 0.5874255895614624,
1842
+ "learning_rate": 1.3494048449426145e-08,
1843
+ "loss": 0.0589,
1844
+ "step": 1310
1845
+ },
1846
+ {
1847
+ "epoch": 2.975113122171946,
1848
+ "grad_norm": 0.6453235149383545,
1849
+ "learning_rate": 6.724184024057279e-09,
1850
+ "loss": 0.0556,
1851
+ "step": 1315
1852
+ },
1853
+ {
1854
+ "epoch": 2.986425339366516,
1855
+ "grad_norm": 0.6077714562416077,
1856
+ "learning_rate": 2.28820318922518e-09,
1857
+ "loss": 0.0674,
1858
+ "step": 1320
1859
+ },
1860
+ {
1861
+ "epoch": 2.997737556561086,
1862
+ "grad_norm": 0.9587042927742004,
1863
+ "learning_rate": 1.8679645866437335e-10,
1864
+ "loss": 0.0585,
1865
+ "step": 1325
1866
+ }
1867
+ ],
1868
+ "logging_steps": 5,
1869
+ "max_steps": 1326,
1870
+ "num_input_tokens_seen": 0,
1871
+ "num_train_epochs": 3,
1872
+ "save_steps": 20000,
1873
+ "stateful_callbacks": {
1874
+ "TrainerControl": {
1875
+ "args": {
1876
+ "should_epoch_stop": false,
1877
+ "should_evaluate": false,
1878
+ "should_log": false,
1879
+ "should_save": true,
1880
+ "should_training_stop": true
1881
+ },
1882
+ "attributes": {}
1883
+ }
1884
+ },
1885
+ "total_flos": 2.2645586738523668e+18,
1886
+ "train_batch_size": 2,
1887
+ "trial_name": null,
1888
+ "trial_params": null
1889
+ }
instruct/26_128_e3_3e-5/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57dd73f6428de88e002dd47511aefac39329537d12b1591d4f800d783b0db97b
+ size 8337
instruct/26_128_e3_3e-5/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
instruct/26_128_e3_3e-5/zero_to_fp32.py ADDED
@@ -0,0 +1,604 @@
1
+ #!/usr/bin/env python
2
+
3
+ # Copyright (c) Microsoft Corporation.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+
6
+ # DeepSpeed Team
7
+
8
+ # This script extracts fp32 consolidated weights from a zero 1, 2 and 3 DeepSpeed checkpoints. It gets
9
+ # copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
10
+ # the future. Once extracted, the weights don't require DeepSpeed and can be used in any
11
+ # application.
12
+ #
13
+ # example: python zero_to_fp32.py . pytorch_model.bin
14
+
15
+ import argparse
16
+ import torch
17
+ import glob
18
+ import math
19
+ import os
20
+ import re
21
+ from collections import OrderedDict
22
+ from dataclasses import dataclass
23
+
24
+ # while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
25
+ # DeepSpeed data structures it has to be available in the current python environment.
26
+ from deepspeed.utils import logger
27
+ from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
28
+ FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
29
+ FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
30
+
31
+
32
+ @dataclass
33
+ class zero_model_state:
34
+ buffers: dict()
35
+ param_shapes: dict()
36
+ shared_params: list
37
+ ds_version: int
38
+ frozen_param_shapes: dict()
39
+ frozen_param_fragments: dict()
40
+
41
+
42
+ debug = 0
43
+
44
+ # load to cpu
45
+ device = torch.device('cpu')
46
+
47
+
48
+ def atoi(text):
49
+ return int(text) if text.isdigit() else text
50
+
51
+
52
+ def natural_keys(text):
53
+ '''
54
+ alist.sort(key=natural_keys) sorts in human order
55
+ http://nedbatchelder.com/blog/200712/human_sorting.html
56
+ (See Toothy's implementation in the comments)
57
+ '''
58
+ return [atoi(c) for c in re.split(r'(\d+)', text)]
59
+
60
+
61
+ def get_model_state_file(checkpoint_dir, zero_stage):
62
+ if not os.path.isdir(checkpoint_dir):
63
+ raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
64
+
65
+ # there should be only one file
66
+ if zero_stage <= 2:
67
+ file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
68
+ elif zero_stage == 3:
69
+ file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
70
+
71
+ if not os.path.exists(file):
72
+ raise FileNotFoundError(f"can't find model states file at '{file}'")
73
+
74
+ return file
75
+
76
+
77
+ def get_checkpoint_files(checkpoint_dir, glob_pattern):
78
+ # XXX: need to test that this simple glob rule works for multi-node setup too
79
+ ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
80
+
81
+ if len(ckpt_files) == 0:
82
+ raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
83
+
84
+ return ckpt_files
85
+
86
+
87
+ def get_optim_files(checkpoint_dir):
88
+ return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
89
+
90
+
91
+ def get_model_state_files(checkpoint_dir):
92
+ return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
93
+
94
+
95
+ def parse_model_states(files):
96
+ zero_model_states = []
97
+ for file in files:
98
+ state_dict = torch.load(file, map_location=device)
99
+
100
+ if BUFFER_NAMES not in state_dict:
101
+ raise ValueError(f"{file} is not a model state checkpoint")
102
+ buffer_names = state_dict[BUFFER_NAMES]
103
+ if debug:
104
+ print("Found buffers:", buffer_names)
105
+
106
+ # recover just the buffers while restoring them to fp32 if they were saved in fp16
107
+ buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
108
+ param_shapes = state_dict[PARAM_SHAPES]
109
+
110
+ # collect parameters that are included in param_shapes
111
+ param_names = []
112
+ for s in param_shapes:
113
+ for name in s.keys():
114
+ param_names.append(name)
115
+
116
+ # update with frozen parameters
117
+ frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
118
+ if frozen_param_shapes is not None:
119
+ if debug:
120
+ print(f"Found frozen_param_shapes: {frozen_param_shapes}")
121
+ param_names += list(frozen_param_shapes.keys())
122
+
123
+ # handle shared params
124
+ shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
125
+
126
+ ds_version = state_dict.get(DS_VERSION, None)
127
+
128
+ frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
129
+
130
+ z_model_state = zero_model_state(buffers=buffers,
131
+ param_shapes=param_shapes,
132
+ shared_params=shared_params,
133
+ ds_version=ds_version,
134
+ frozen_param_shapes=frozen_param_shapes,
135
+ frozen_param_fragments=frozen_param_fragments)
136
+ zero_model_states.append(z_model_state)
137
+
138
+ return zero_model_states
139
+
140
+
141
+ def parse_optim_states(files, ds_checkpoint_dir):
142
+
143
+ total_files = len(files)
144
+ state_dicts = []
145
+ for f in files:
146
+ state_dict = torch.load(f, map_location=device)
147
+ # immediately discard the potentially huge 2 optimizer states as we only care for fp32 master weights
148
+ # and also handle the case where it was already removed by another helper script
149
+ state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
150
+ state_dicts.append(state_dict)
151
+
152
+ if not ZERO_STAGE in state_dicts[0][OPTIMIZER_STATE_DICT]:
153
+ raise ValueError(f"{files[0]} is not a zero checkpoint")
154
+ zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
155
+ world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
156
+
157
+ # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
158
+ # parameters can be different from data parallelism for non-expert parameters. So we can just
159
+ # use the max of the partition_count to get the dp world_size.
160
+
161
+ if type(world_size) is list:
162
+ world_size = max(world_size)
163
+
164
+ if world_size != total_files:
165
+ raise ValueError(
166
+ f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
167
+ "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
168
+ )
169
+
170
+ # the groups are named differently in each stage
171
+ if zero_stage <= 2:
172
+ fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
173
+ elif zero_stage == 3:
174
+ fp32_groups_key = FP32_FLAT_GROUPS
175
+ else:
176
+ raise ValueError(f"unknown zero stage {zero_stage}")
177
+
178
+ if zero_stage <= 2:
179
+ fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
180
+ elif zero_stage == 3:
181
+ # if there is more than one param group, there will be multiple flattened tensors - one
182
+ # flattened tensor per group - for simplicity merge them into a single tensor
183
+ #
184
+ # XXX: could make the script more memory efficient for when there are multiple groups - it
185
+ # will require matching the sub-lists of param_shapes for each param group flattened tensor
186
+
187
+ fp32_flat_groups = [
188
+ torch.cat(state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key], 0) for i in range(len(state_dicts))
189
+ ]
190
+
191
+ return zero_stage, world_size, fp32_flat_groups
192
+
193
+
194
+ def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters):
195
+ """
196
+ Returns fp32 state_dict reconstructed from ds checkpoint
197
+
198
+ Args:
199
+ - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
200
+
201
+ """
202
+ print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
203
+
204
+ optim_files = get_optim_files(ds_checkpoint_dir)
205
+ zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
206
+ print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
207
+
208
+ model_files = get_model_state_files(ds_checkpoint_dir)
209
+
210
+ zero_model_states = parse_model_states(model_files)
211
+ print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
212
+
213
+ if zero_stage <= 2:
214
+ return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
215
+ exclude_frozen_parameters)
216
+ elif zero_stage == 3:
217
+ return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
218
+ exclude_frozen_parameters)
219
+
220
+
221
+ def _zero2_merge_frozen_params(state_dict, zero_model_states):
222
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
223
+ return
224
+
225
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
226
+ frozen_param_fragments = zero_model_states[0].frozen_param_fragments
227
+
228
+ if debug:
229
+ num_elem = sum(s.numel() for s in frozen_param_shapes.values())
230
+ print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
231
+
232
+ wanted_params = len(frozen_param_shapes)
233
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
234
+ avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
235
+ print(f'Frozen params: Have {avail_numel} numels to process.')
236
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
237
+
238
+ total_params = 0
239
+ total_numel = 0
240
+ for name, shape in frozen_param_shapes.items():
241
+ total_params += 1
242
+ unpartitioned_numel = shape.numel()
243
+ total_numel += unpartitioned_numel
244
+
245
+ state_dict[name] = frozen_param_fragments[name]
246
+
247
+ if debug:
248
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
249
+
250
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
251
+
252
+
253
+ def _has_callable(obj, fn):
254
+ attr = getattr(obj, fn, None)
255
+ return callable(attr)
256
+
257
+
258
+ def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
259
+ param_shapes = zero_model_states[0].param_shapes
260
+
261
+ # Reconstruction protocol:
262
+ #
263
+ # XXX: document this
264
+
265
+ if debug:
266
+ for i in range(world_size):
267
+ for j in range(len(fp32_flat_groups[0])):
268
+ print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
269
+
270
+ # XXX: memory usage doubles here (zero2)
271
+ num_param_groups = len(fp32_flat_groups[0])
272
+ merged_single_partition_of_fp32_groups = []
273
+ for i in range(num_param_groups):
274
+ merged_partitions = [sd[i] for sd in fp32_flat_groups]
275
+ full_single_fp32_vector = torch.cat(merged_partitions, 0)
276
+ merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
277
+ avail_numel = sum(
278
+ [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
279
+
280
+ if debug:
281
+ wanted_params = sum([len(shapes) for shapes in param_shapes])
282
+ wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
283
+ # not asserting if there is a mismatch due to possible padding
284
+ print(f"Have {avail_numel} numels to process.")
285
+ print(f"Need {wanted_numel} numels in {wanted_params} params.")
286
+
287
+ # params
288
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
289
+ # out-of-core computing solution
290
+ total_numel = 0
291
+ total_params = 0
292
+ for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
293
+ offset = 0
294
+ avail_numel = full_single_fp32_vector.numel()
295
+ for name, shape in shapes.items():
296
+
297
+ unpartitioned_numel = shape.numel() if _has_callable(shape, 'numel') else math.prod(shape)
298
+ total_numel += unpartitioned_numel
299
+ total_params += 1
300
+
301
+ if debug:
302
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
303
+ state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
304
+ offset += unpartitioned_numel
305
+
306
+ # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
307
+ # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
308
+ # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
309
+ # live optimizer object, so we are checking that the numbers are within the right range
310
+ align_to = 2 * world_size
311
+
312
+ def zero2_align(x):
313
+ return align_to * math.ceil(x / align_to)
314
+
315
+ if debug:
316
+ print(f"original offset={offset}, avail_numel={avail_numel}")
317
+
318
+ offset = zero2_align(offset)
319
+ avail_numel = zero2_align(avail_numel)
320
+
321
+ if debug:
322
+ print(f"aligned offset={offset}, avail_numel={avail_numel}")
323
+
324
+ # Sanity check
325
+ if offset != avail_numel:
326
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
327
+
328
+ print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
329
+
330
+
331
+ def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
332
+ exclude_frozen_parameters):
333
+ state_dict = OrderedDict()
334
+
335
+ # buffers
336
+ buffers = zero_model_states[0].buffers
337
+ state_dict.update(buffers)
338
+ if debug:
339
+ print(f"added {len(buffers)} buffers")
340
+
341
+ if not exclude_frozen_parameters:
342
+ _zero2_merge_frozen_params(state_dict, zero_model_states)
343
+
344
+ _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
345
+
346
+ # recover shared parameters
347
+ for pair in zero_model_states[0].shared_params:
348
+ if pair[1] in state_dict:
349
+ state_dict[pair[0]] = state_dict[pair[1]]
350
+
351
+ return state_dict
352
+
353
+
354
+ def zero3_partitioned_param_info(unpartitioned_numel, world_size):
355
+ remainder = unpartitioned_numel % world_size
356
+ padding_numel = (world_size - remainder) if remainder else 0
357
+ partitioned_numel = math.ceil(unpartitioned_numel / world_size)
358
+ return partitioned_numel, padding_numel
359
+
360
+
361
+ def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
362
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
363
+ return
364
+
365
+ if debug:
366
+ for i in range(world_size):
367
+ num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
368
+ print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
369
+
370
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
371
+ wanted_params = len(frozen_param_shapes)
372
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
373
+ avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
374
+ print(f'Frozen params: Have {avail_numel} numels to process.')
375
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
376
+
377
+ total_params = 0
378
+ total_numel = 0
379
+ for name, shape in zero_model_states[0].frozen_param_shapes.items():
380
+ total_params += 1
381
+ unpartitioned_numel = shape.numel()
382
+ total_numel += unpartitioned_numel
383
+
384
+ param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
385
+ state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
386
+
387
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
388
+
389
+ if debug:
390
+ print(
391
+ f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
392
+ )
393
+
394
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
395
+
396
+
397
+ def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
398
+ param_shapes = zero_model_states[0].param_shapes
399
+ avail_numel = fp32_flat_groups[0].numel() * world_size
400
+ # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
401
+ # param, re-consolidating each param, while dealing with padding if any
402
+
403
+ # merge list of dicts, preserving order
404
+ param_shapes = {k: v for d in param_shapes for k, v in d.items()}
405
+
406
+ if debug:
407
+ for i in range(world_size):
408
+ print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")
409
+
410
+ wanted_params = len(param_shapes)
411
+ wanted_numel = sum(shape.numel() for shape in param_shapes.values())
412
+ # not asserting if there is a mismatch due to possible padding
413
+ avail_numel = fp32_flat_groups[0].numel() * world_size
414
+ print(f"Trainable params: Have {avail_numel} numels to process.")
415
+ print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
416
+
417
+ # params
418
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
419
+ # out-of-core computing solution
420
+ offset = 0
421
+ total_numel = 0
422
+ total_params = 0
423
+ for name, shape in param_shapes.items():
424
+
425
+ unpartitioned_numel = shape.numel()
426
+ total_numel += unpartitioned_numel
427
+ total_params += 1
428
+
429
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
430
+
431
+ if debug:
432
+ print(
433
+ f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
434
+ )
435
+
436
+ # XXX: memory usage doubles here
437
+ state_dict[name] = torch.cat(
438
+ tuple(fp32_flat_groups[i].narrow(0, offset, partitioned_numel) for i in range(world_size)),
439
+ 0).narrow(0, 0, unpartitioned_numel).view(shape)
440
+ offset += partitioned_numel
441
+
442
+ offset *= world_size
443
+
444
+ # Sanity check
445
+ if offset != avail_numel:
446
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
447
+
448
+ print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
449
+
450
+
451
+ def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
452
+ exclude_frozen_parameters):
453
+ state_dict = OrderedDict()
454
+
455
+ # buffers
456
+ buffers = zero_model_states[0].buffers
457
+ state_dict.update(buffers)
458
+ if debug:
459
+ print(f"added {len(buffers)} buffers")
460
+
461
+ if not exclude_frozen_parameters:
462
+ _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
463
+
464
+ _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
465
+
466
+ # recover shared parameters
467
+ for pair in zero_model_states[0].shared_params:
468
+ if pair[1] in state_dict:
469
+ state_dict[pair[0]] = state_dict[pair[1]]
470
+
471
+ return state_dict
472
+
473
+
474
+ def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag=None, exclude_frozen_parameters=False):
+     """
+     Convert a ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
+     ``load_state_dict()`` and used for training without DeepSpeed, or shared with others, for example
+     via a model hub.
+
+     Args:
+         - ``checkpoint_dir``: path to the desired checkpoint folder
+         - ``tag``: checkpoint tag used as a unique identifier for the checkpoint, e.g., ``global_step14``. If not provided, will attempt to load the tag from the ``latest`` file.
+         - ``exclude_frozen_parameters``: exclude frozen parameters
+
+     Returns:
+         - pytorch ``state_dict``
+
+     Note: this approach may not work if your application doesn't have sufficient free CPU memory, in
+     which case you may need to use the offline approach via the ``zero_to_fp32.py`` script that is
+     saved with the checkpoint.
+
+     A typical usage might be ::
+
+         from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
+         # do the training and checkpoint saving
+         state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
+         model = model.cpu() # move to cpu
+         model.load_state_dict(state_dict)
+         # submit to model hub or save the model to share with others
+
+     In this example the ``model`` will no longer be usable in the deepspeed context of the same
+     application, i.e. you will need to re-initialize the deepspeed engine, since
+     ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+     If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
+
+     """
+     if tag is None:
+         latest_path = os.path.join(checkpoint_dir, 'latest')
+         if os.path.isfile(latest_path):
+             with open(latest_path, 'r') as fd:
+                 tag = fd.read().strip()
+         else:
+             raise ValueError(f"Unable to find 'latest' file at {latest_path}")
+
+     ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
+
+     if not os.path.isdir(ds_checkpoint_dir):
+         raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
+
+     return _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters)
+
+
+ def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir, output_file, tag=None, exclude_frozen_parameters=False):
+     """
+     Convert a ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
+     loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
+
+     Args:
+         - ``checkpoint_dir``: path to the desired checkpoint folder (one that contains the tag-folder, like ``global_step14``)
+         - ``output_file``: path to the pytorch fp32 state_dict output file (e.g. path/pytorch_model.bin)
+         - ``tag``: checkpoint tag used as a unique identifier for the checkpoint, e.g., ``global_step14``. If not provided, will attempt to load the tag from the file named ``latest`` in the checkpoint folder.
+         - ``exclude_frozen_parameters``: exclude frozen parameters
+     """
+
+     state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag, exclude_frozen_parameters)
+     print(f"Saving fp32 state dict to {output_file}")
+     torch.save(state_dict, output_file)
+
+
+ def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
+     """
+     1. Put the provided model on cpu
+     2. Convert a ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
+     3. Load it into the provided model
+
+     Args:
+         - ``model``: the model object to update
+         - ``checkpoint_dir``: path to the desired checkpoint folder (one that contains the tag-folder, like ``global_step14``)
+         - ``tag``: checkpoint tag used as a unique identifier for the checkpoint, e.g., ``global_step14``. If not provided, will attempt to load the tag from the file named ``latest`` in the checkpoint folder.
+
+     Returns:
+         - ``model``: modified model
+
+     Make sure you have plenty of CPU memory available before you call this function. If you don't
+     have enough, use the ``zero_to_fp32.py`` utility to do the conversion instead. You will find it
+     conveniently placed for you in the checkpoint folder.
+
+     A typical usage might be ::
+
+         from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
+         model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
+         # submit to model hub or save the model to share with others
+
+     Note that once this is run, the ``model`` will no longer be usable in the deepspeed context
+     of the same application, i.e. you will need to re-initialize the deepspeed engine, since
+     ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+     """
+     logger.info("Extracting fp32 weights")
+     state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
+
+     logger.info("Overwriting model with fp32 weights")
+     model = model.cpu()
+     model.load_state_dict(state_dict, strict=False)
+
+     return model
+
+
+ if __name__ == "__main__":
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("checkpoint_dir",
+                         type=str,
+                         help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
+     parser.add_argument(
+         "output_file",
+         type=str,
+         help="path to the pytorch fp32 state_dict output file (e.g. path/checkpoint-12/pytorch_model.bin)")
+     parser.add_argument("-t",
+                         "--tag",
+                         type=str,
+                         default=None,
+                         help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
+     parser.add_argument("--exclude_frozen_parameters", action='store_true', help="exclude frozen parameters")
+     parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
+     args = parser.parse_args()
+
+     debug = args.debug
+
+     convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir,
+                                                args.output_file,
+                                                tag=args.tag,
+                                                exclude_frozen_parameters=args.exclude_frozen_parameters)