shareit committed · Commit d9b4008 · verified · 1 Parent(s): f37417e

Training in progress, step 300, checkpoint
last-checkpoint/README.md ADDED
@@ -0,0 +1,210 @@
+ ---
+ base_model: unsloth/phi-4-reasoning-unsloth-bnb-4bit
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:unsloth/phi-4-reasoning-unsloth-bnb-4bit
+ - lora
+ - sft
+ - transformers
+ - trl
+ - unsloth
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.1
last-checkpoint/adapter_config.json ADDED
@@ -0,0 +1,50 @@
+ {
+ "alora_invocation_tokens": null,
+ "alpha_pattern": {},
+ "arrow_config": null,
+ "auto_mapping": {
+ "base_model_class": "Phi3ForCausalLM",
+ "parent_library": "transformers.models.phi3.modeling_phi3",
+ "unsloth_fixed": true
+ },
+ "base_model_name_or_path": "unsloth/phi-4-reasoning-unsloth-bnb-4bit",
+ "bias": "none",
+ "corda_config": null,
+ "ensure_weight_tying": false,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 64,
+ "lora_bias": false,
+ "lora_dropout": 0.0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "peft_version": "0.18.1",
+ "qalora_group_size": 16,
+ "r": 64,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "up_proj",
+ "o_proj",
+ "k_proj",
+ "down_proj",
+ "q_proj",
+ "v_proj",
+ "gate_proj"
+ ],
+ "target_parameters": null,
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
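The config above describes a rank-64 LoRA adapter (lora_alpha 64, dropout 0.0) targeting all attention and MLP projections of the Phi-3-architecture base. A minimal loading sketch with `peft` and `transformers`, assuming a local copy of this checkpoint directory; the `last-checkpoint` path and generation settings are illustrative, not part of the commit:

```python
# Sketch: load the 4-bit base model and apply this LoRA checkpoint with PEFT.
# "last-checkpoint" is an assumed local path to the directory in this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/phi-4-reasoning-unsloth-bnb-4bit",  # base_model_name_or_path above
    device_map="auto",
)
# Reads adapter_config.json and adapter_model.safetensors from the directory.
model = PeftModel.from_pretrained(base, "last-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("last-checkpoint")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```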
last-checkpoint/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e974a9e9871067d6087b72108af8673328df742e190e49e5adeb5ba13851cbff
+ size 340808816
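The three lines above are a Git LFS pointer, not the weights themselves; the roughly 340 MB safetensors file is stored in LFS and identified by its sha256 oid. A small sketch of verifying a downloaded copy against that oid (the local path is an assumption):

```python
# Sketch: check a downloaded adapter_model.safetensors against the LFS
# pointer's sha256 oid. Path and chunk size are illustrative.
import hashlib

EXPECTED_OID = "e974a9e9871067d6087b72108af8673328df742e190e49e5adeb5ba13851cbff"

h = hashlib.sha256()
with open("last-checkpoint/adapter_model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == EXPECTED_OID, "file does not match the LFS pointer"
```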
last-checkpoint/chat_template.jinja ADDED
@@ -0,0 +1 @@
+ {% for message in messages %}{% if (message['role'] == 'system') %}{{'<|im_start|>system<|im_sep|>' + message['content'] + '<|im_end|>'}}{% elif (message['role'] == 'user') %}{{'<|im_start|>user<|im_sep|>' + message['content'] + '<|im_end|>'}}{% elif (message['role'] == 'assistant') %}{{'<|im_start|>assistant<|im_sep|>' + message['content'] + '<|im_end|>'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant<|im_sep|>' }}{% endif %}
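This single-line Jinja template wraps each turn as `<|im_start|>{role}<|im_sep|>...<|im_end|>` and, when `add_generation_prompt` is set, opens an assistant turn for the model to complete. A quick sketch of rendering it directly with `jinja2` (the same engine transformers uses for chat templates), assuming the file is available locally:

```python
# Sketch: render the chat template above for a sample conversation.
from jinja2 import Template

with open("last-checkpoint/chat_template.jinja") as f:  # assumed local path
    template = Template(f.read())

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(template.render(messages=messages, add_generation_prompt=True))
# <|im_start|>system<|im_sep|>You are a helpful assistant.<|im_end|><|im_start|>user<|im_sep|>Hello!<|im_end|><|im_start|>assistant<|im_sep|>
```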
last-checkpoint/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ad07eed758690626af66f0050d6785b86fe5e46f94e657c2b3ae425412418da
+ size 173247691
last-checkpoint/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61c19bab1174704a4a4441475683bf1270277af15d2e2c95e964789128e482c4
+ size 14645
last-checkpoint/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c59f6c0c3c7ac90729151078281ffd3574d86b4063a767b8e0ef9e412646652
+ size 1465
last-checkpoint/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
last-checkpoint/tokenizer_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "add_prefix_space": false,
+ "backend": "tokenizers",
+ "bos_token": "<|endoftext|>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "is_local": false,
+ "model_max_length": 32768,
+ "pad_token": "<|dummy_85|>",
+ "padding_side": "right",
+ "tokenizer_class": "TokenizersBackend",
+ "unk_token": "�"
+ }
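For reference, a short sketch of how these settings surface once the tokenizer is loaded (local path assumed as in the earlier sketches):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("last-checkpoint")  # assumed local path
print(tok.eos_token)         # '<|im_end|>' — closes every chat turn in the template above
print(tok.pad_token)         # '<|dummy_85|>'
print(tok.padding_side)      # 'right'
print(tok.model_max_length)  # 32768
```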
last-checkpoint/trainer_state.json ADDED
@@ -0,0 +1,2134 @@
+ {
+ "best_global_step": null,
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.36809815950920244,
+ "eval_steps": 500,
+ "global_step": 300,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.001226993865030675,
+ "grad_norm": 0.056844476610422134,
+ "learning_rate": 0.0,
+ "loss": 1.5532171726226807,
+ "step": 1
+ },
+ {
+ "epoch": 0.00245398773006135,
+ "grad_norm": 0.05398453772068024,
+ "learning_rate": 7.668711656441719e-08,
+ "loss": 1.5773115158081055,
+ "step": 2
+ },
+ {
+ "epoch": 0.0036809815950920245,
+ "grad_norm": 0.06344727426767349,
+ "learning_rate": 1.5337423312883438e-07,
+ "loss": 1.3505765199661255,
+ "step": 3
+ },
+ {
+ "epoch": 0.0049079754601227,
+ "grad_norm": 0.06651192903518677,
+ "learning_rate": 2.3006134969325155e-07,
+ "loss": 1.4871529340744019,
+ "step": 4
+ },
+ {
+ "epoch": 0.006134969325153374,
+ "grad_norm": 0.06967559456825256,
+ "learning_rate": 3.0674846625766876e-07,
+ "loss": 1.3247398138046265,
+ "step": 5
+ },
+ {
+ "epoch": 0.007361963190184049,
+ "grad_norm": 0.05090988427400589,
+ "learning_rate": 3.834355828220859e-07,
+ "loss": 1.4368259906768799,
+ "step": 6
+ },
+ {
+ "epoch": 0.008588957055214725,
+ "grad_norm": 0.05639501288533211,
+ "learning_rate": 4.601226993865031e-07,
+ "loss": 1.4667942523956299,
+ "step": 7
+ },
+ {
+ "epoch": 0.0098159509202454,
+ "grad_norm": 0.04931354150176048,
+ "learning_rate": 5.368098159509203e-07,
+ "loss": 1.4420663118362427,
+ "step": 8
+ },
+ {
+ "epoch": 0.011042944785276074,
+ "grad_norm": 0.06962227076292038,
+ "learning_rate": 6.134969325153375e-07,
+ "loss": 1.3270829916000366,
+ "step": 9
+ },
+ {
+ "epoch": 0.012269938650306749,
+ "grad_norm": 0.05681212246417999,
+ "learning_rate": 6.901840490797546e-07,
+ "loss": 1.3849859237670898,
+ "step": 10
+ },
+ {
+ "epoch": 0.013496932515337423,
+ "grad_norm": 0.06245573237538338,
+ "learning_rate": 7.668711656441718e-07,
+ "loss": 1.390697956085205,
+ "step": 11
+ },
+ {
+ "epoch": 0.014723926380368098,
+ "grad_norm": 0.051362067461013794,
+ "learning_rate": 8.43558282208589e-07,
+ "loss": 1.5184454917907715,
+ "step": 12
+ },
+ {
+ "epoch": 0.015950920245398775,
+ "grad_norm": 0.057539232075214386,
+ "learning_rate": 9.202453987730062e-07,
+ "loss": 1.318811297416687,
+ "step": 13
+ },
+ {
+ "epoch": 0.01717791411042945,
+ "grad_norm": 0.05592890456318855,
+ "learning_rate": 9.969325153374232e-07,
+ "loss": 1.2885284423828125,
+ "step": 14
+ },
+ {
+ "epoch": 0.018404907975460124,
+ "grad_norm": 0.07110682129859924,
+ "learning_rate": 1.0736196319018406e-06,
+ "loss": 1.3424336910247803,
+ "step": 15
+ },
+ {
+ "epoch": 0.0196319018404908,
+ "grad_norm": 0.05588489770889282,
+ "learning_rate": 1.1503067484662577e-06,
+ "loss": 1.4609296321868896,
+ "step": 16
+ },
+ {
+ "epoch": 0.020858895705521473,
+ "grad_norm": 0.05414435639977455,
+ "learning_rate": 1.226993865030675e-06,
+ "loss": 1.346307635307312,
+ "step": 17
+ },
+ {
+ "epoch": 0.022085889570552148,
+ "grad_norm": 0.05896000191569328,
+ "learning_rate": 1.303680981595092e-06,
+ "loss": 1.2913644313812256,
+ "step": 18
+ },
+ {
+ "epoch": 0.023312883435582823,
+ "grad_norm": 0.05400668457150459,
+ "learning_rate": 1.3803680981595093e-06,
+ "loss": 1.3340139389038086,
+ "step": 19
+ },
+ {
+ "epoch": 0.024539877300613498,
+ "grad_norm": 0.051172543317079544,
+ "learning_rate": 1.4570552147239264e-06,
+ "loss": 1.5014619827270508,
+ "step": 20
+ },
+ {
+ "epoch": 0.025766871165644172,
+ "grad_norm": 0.06336528807878494,
+ "learning_rate": 1.5337423312883435e-06,
+ "loss": 1.4562170505523682,
+ "step": 21
+ },
+ {
+ "epoch": 0.026993865030674847,
+ "grad_norm": 0.05446509271860123,
+ "learning_rate": 1.6104294478527609e-06,
+ "loss": 1.4978207349777222,
+ "step": 22
+ },
+ {
+ "epoch": 0.02822085889570552,
+ "grad_norm": 0.053644318133592606,
+ "learning_rate": 1.687116564417178e-06,
+ "loss": 1.3169641494750977,
+ "step": 23
+ },
+ {
+ "epoch": 0.029447852760736196,
+ "grad_norm": 0.059931203722953796,
+ "learning_rate": 1.763803680981595e-06,
+ "loss": 1.388343095779419,
+ "step": 24
+ },
+ {
+ "epoch": 0.03067484662576687,
+ "grad_norm": 0.05717086419463158,
+ "learning_rate": 1.8404907975460124e-06,
+ "loss": 1.5201928615570068,
+ "step": 25
+ },
+ {
+ "epoch": 0.03190184049079755,
+ "grad_norm": 0.07009559869766235,
+ "learning_rate": 1.9171779141104296e-06,
+ "loss": 1.4164372682571411,
+ "step": 26
+ },
+ {
+ "epoch": 0.033128834355828224,
+ "grad_norm": 0.07497748732566833,
+ "learning_rate": 1.9938650306748465e-06,
+ "loss": 1.525839924812317,
+ "step": 27
+ },
+ {
+ "epoch": 0.0343558282208589,
+ "grad_norm": 0.05568641796708107,
+ "learning_rate": 2.070552147239264e-06,
+ "loss": 1.2813575267791748,
+ "step": 28
+ },
+ {
+ "epoch": 0.03558282208588957,
+ "grad_norm": 0.06756455451250076,
+ "learning_rate": 2.147239263803681e-06,
+ "loss": 1.4711202383041382,
+ "step": 29
+ },
+ {
+ "epoch": 0.03680981595092025,
+ "grad_norm": 0.05878760293126106,
+ "learning_rate": 2.223926380368098e-06,
+ "loss": 1.436352252960205,
+ "step": 30
+ },
+ {
+ "epoch": 0.03803680981595092,
+ "grad_norm": 0.06954875588417053,
+ "learning_rate": 2.3006134969325154e-06,
+ "loss": 1.5035226345062256,
+ "step": 31
+ },
+ {
+ "epoch": 0.0392638036809816,
+ "grad_norm": 0.05211090296506882,
+ "learning_rate": 2.3773006134969327e-06,
+ "loss": 1.2772533893585205,
+ "step": 32
+ },
+ {
+ "epoch": 0.04049079754601227,
+ "grad_norm": 0.05606016516685486,
+ "learning_rate": 2.45398773006135e-06,
+ "loss": 1.3446189165115356,
+ "step": 33
+ },
+ {
+ "epoch": 0.04171779141104295,
+ "grad_norm": 0.06906063854694366,
+ "learning_rate": 2.530674846625767e-06,
+ "loss": 1.5665283203125,
+ "step": 34
+ },
+ {
+ "epoch": 0.04294478527607362,
+ "grad_norm": 0.0749950110912323,
+ "learning_rate": 2.607361963190184e-06,
+ "loss": 1.3163764476776123,
+ "step": 35
+ },
+ {
+ "epoch": 0.044171779141104296,
+ "grad_norm": 0.06453581154346466,
+ "learning_rate": 2.6840490797546016e-06,
+ "loss": 1.3515610694885254,
+ "step": 36
+ },
+ {
+ "epoch": 0.04539877300613497,
+ "grad_norm": 0.06933236867189407,
+ "learning_rate": 2.7607361963190186e-06,
+ "loss": 1.391564130783081,
+ "step": 37
+ },
+ {
+ "epoch": 0.046625766871165646,
+ "grad_norm": 0.06258511543273926,
+ "learning_rate": 2.8374233128834355e-06,
+ "loss": 1.3991773128509521,
+ "step": 38
+ },
+ {
+ "epoch": 0.04785276073619632,
+ "grad_norm": 0.06230465695261955,
+ "learning_rate": 2.914110429447853e-06,
+ "loss": 1.4381181001663208,
+ "step": 39
+ },
+ {
+ "epoch": 0.049079754601226995,
+ "grad_norm": 0.07217125594615936,
+ "learning_rate": 2.99079754601227e-06,
+ "loss": 1.676390290260315,
+ "step": 40
+ },
+ {
+ "epoch": 0.05030674846625767,
+ "grad_norm": 0.07924707233905792,
+ "learning_rate": 3.067484662576687e-06,
+ "loss": 1.437603235244751,
+ "step": 41
+ },
+ {
+ "epoch": 0.051533742331288344,
+ "grad_norm": 0.07502411305904388,
+ "learning_rate": 3.1441717791411044e-06,
+ "loss": 1.4542899131774902,
+ "step": 42
+ },
+ {
+ "epoch": 0.05276073619631902,
+ "grad_norm": 0.08459946513175964,
+ "learning_rate": 3.2208588957055217e-06,
+ "loss": 1.5027896165847778,
+ "step": 43
+ },
+ {
+ "epoch": 0.053987730061349694,
+ "grad_norm": 0.06554778665304184,
+ "learning_rate": 3.2975460122699386e-06,
+ "loss": 1.3057382106781006,
+ "step": 44
+ },
+ {
+ "epoch": 0.05521472392638037,
+ "grad_norm": 0.06956043839454651,
+ "learning_rate": 3.374233128834356e-06,
+ "loss": 1.3026341199874878,
+ "step": 45
+ },
+ {
+ "epoch": 0.05644171779141104,
+ "grad_norm": 0.09831614047288895,
+ "learning_rate": 3.4509202453987733e-06,
+ "loss": 1.357859492301941,
+ "step": 46
+ },
+ {
+ "epoch": 0.05766871165644172,
+ "grad_norm": 0.09383076429367065,
+ "learning_rate": 3.52760736196319e-06,
+ "loss": 1.3813140392303467,
+ "step": 47
+ },
+ {
+ "epoch": 0.05889570552147239,
+ "grad_norm": 0.07468827068805695,
+ "learning_rate": 3.6042944785276075e-06,
+ "loss": 1.5053529739379883,
+ "step": 48
+ },
+ {
+ "epoch": 0.06012269938650307,
+ "grad_norm": 0.07278682291507721,
+ "learning_rate": 3.680981595092025e-06,
+ "loss": 1.3573490381240845,
+ "step": 49
+ },
+ {
+ "epoch": 0.06134969325153374,
+ "grad_norm": 0.07713132351636887,
+ "learning_rate": 3.7576687116564418e-06,
+ "loss": 1.389432430267334,
+ "step": 50
+ },
+ {
+ "epoch": 0.06257668711656442,
+ "grad_norm": 0.08927592635154724,
+ "learning_rate": 3.834355828220859e-06,
+ "loss": 1.354691982269287,
+ "step": 51
+ },
+ {
+ "epoch": 0.0638036809815951,
+ "grad_norm": 0.07328439503908157,
+ "learning_rate": 3.911042944785277e-06,
+ "loss": 1.4615235328674316,
+ "step": 52
+ },
+ {
+ "epoch": 0.06503067484662577,
+ "grad_norm": 0.1023353561758995,
+ "learning_rate": 3.987730061349693e-06,
+ "loss": 1.2745380401611328,
+ "step": 53
+ },
+ {
+ "epoch": 0.06625766871165645,
+ "grad_norm": 0.09883410483598709,
+ "learning_rate": 4.064417177914111e-06,
+ "loss": 1.536147952079773,
+ "step": 54
+ },
+ {
+ "epoch": 0.06748466257668712,
+ "grad_norm": 0.0988430604338646,
+ "learning_rate": 4.141104294478528e-06,
+ "loss": 1.233506679534912,
+ "step": 55
+ },
+ {
+ "epoch": 0.0687116564417178,
+ "grad_norm": 0.09460455179214478,
+ "learning_rate": 4.2177914110429445e-06,
+ "loss": 1.4381420612335205,
+ "step": 56
+ },
+ {
+ "epoch": 0.06993865030674846,
+ "grad_norm": 0.08306065201759338,
+ "learning_rate": 4.294478527607362e-06,
+ "loss": 1.373732328414917,
+ "step": 57
+ },
+ {
+ "epoch": 0.07116564417177915,
+ "grad_norm": 0.09126616269350052,
+ "learning_rate": 4.371165644171779e-06,
+ "loss": 1.3677804470062256,
+ "step": 58
+ },
+ {
+ "epoch": 0.07239263803680981,
+ "grad_norm": 0.10076750069856644,
+ "learning_rate": 4.447852760736196e-06,
+ "loss": 1.517656683921814,
+ "step": 59
+ },
+ {
+ "epoch": 0.0736196319018405,
+ "grad_norm": 0.09846878796815872,
+ "learning_rate": 4.524539877300614e-06,
+ "loss": 1.4145147800445557,
+ "step": 60
+ },
+ {
+ "epoch": 0.07484662576687116,
+ "grad_norm": 0.09265545010566711,
+ "learning_rate": 4.601226993865031e-06,
+ "loss": 1.3576478958129883,
+ "step": 61
+ },
+ {
+ "epoch": 0.07607361963190185,
+ "grad_norm": 0.08007940649986267,
+ "learning_rate": 4.6779141104294485e-06,
+ "loss": 1.2068111896514893,
+ "step": 62
+ },
+ {
+ "epoch": 0.07730061349693251,
+ "grad_norm": 0.08868364989757538,
+ "learning_rate": 4.7546012269938654e-06,
+ "loss": 1.6644642353057861,
+ "step": 63
+ },
+ {
+ "epoch": 0.0785276073619632,
+ "grad_norm": 0.09127894788980484,
+ "learning_rate": 4.831288343558282e-06,
+ "loss": 1.3978849649429321,
+ "step": 64
+ },
+ {
+ "epoch": 0.07975460122699386,
+ "grad_norm": 0.08983105421066284,
+ "learning_rate": 4.9079754601227e-06,
+ "loss": 1.340972661972046,
+ "step": 65
+ },
+ {
+ "epoch": 0.08098159509202454,
+ "grad_norm": 0.1077636331319809,
+ "learning_rate": 4.984662576687116e-06,
+ "loss": 1.2714447975158691,
+ "step": 66
+ },
+ {
+ "epoch": 0.08220858895705521,
+ "grad_norm": 0.08831098675727844,
+ "learning_rate": 5.061349693251534e-06,
+ "loss": 1.3161592483520508,
+ "step": 67
+ },
+ {
+ "epoch": 0.0834355828220859,
+ "grad_norm": 0.09379947930574417,
+ "learning_rate": 5.138036809815952e-06,
+ "loss": 1.4364651441574097,
+ "step": 68
+ },
+ {
+ "epoch": 0.08466257668711656,
+ "grad_norm": 0.09187101572751999,
+ "learning_rate": 5.214723926380368e-06,
+ "loss": 1.349583387374878,
+ "step": 69
+ },
+ {
+ "epoch": 0.08588957055214724,
+ "grad_norm": 0.0946347787976265,
+ "learning_rate": 5.2914110429447855e-06,
+ "loss": 1.4100244045257568,
+ "step": 70
+ },
+ {
+ "epoch": 0.08711656441717791,
+ "grad_norm": 0.10264250636100769,
+ "learning_rate": 5.368098159509203e-06,
+ "loss": 1.3347933292388916,
+ "step": 71
+ },
+ {
+ "epoch": 0.08834355828220859,
+ "grad_norm": 0.11045140773057938,
+ "learning_rate": 5.444785276073619e-06,
+ "loss": 1.274242877960205,
+ "step": 72
+ },
+ {
+ "epoch": 0.08957055214723926,
+ "grad_norm": 0.10263700783252716,
+ "learning_rate": 5.521472392638037e-06,
+ "loss": 1.4386701583862305,
+ "step": 73
+ },
+ {
+ "epoch": 0.09079754601226994,
+ "grad_norm": 0.08373517543077469,
+ "learning_rate": 5.598159509202455e-06,
+ "loss": 1.1995660066604614,
+ "step": 74
+ },
+ {
+ "epoch": 0.09202453987730061,
+ "grad_norm": 0.09021089226007462,
+ "learning_rate": 5.674846625766871e-06,
+ "loss": 1.4014225006103516,
+ "step": 75
+ },
+ {
+ "epoch": 0.09325153374233129,
+ "grad_norm": 0.09492361545562744,
+ "learning_rate": 5.751533742331289e-06,
+ "loss": 1.2872931957244873,
+ "step": 76
+ },
+ {
+ "epoch": 0.09447852760736196,
+ "grad_norm": 0.09280592948198318,
+ "learning_rate": 5.828220858895706e-06,
+ "loss": 1.242187261581421,
+ "step": 77
+ },
+ {
+ "epoch": 0.09570552147239264,
+ "grad_norm": 0.09655126184225082,
+ "learning_rate": 5.9049079754601225e-06,
+ "loss": 1.347137451171875,
+ "step": 78
+ },
+ {
+ "epoch": 0.09693251533742331,
+ "grad_norm": 0.09948047995567322,
+ "learning_rate": 5.98159509202454e-06,
+ "loss": 1.4971836805343628,
+ "step": 79
+ },
+ {
+ "epoch": 0.09815950920245399,
+ "grad_norm": 0.10510187596082687,
+ "learning_rate": 6.058282208588957e-06,
+ "loss": 1.2555480003356934,
+ "step": 80
+ },
+ {
+ "epoch": 0.09938650306748466,
+ "grad_norm": 0.10650225728750229,
+ "learning_rate": 6.134969325153374e-06,
+ "loss": 1.4250295162200928,
+ "step": 81
+ },
+ {
+ "epoch": 0.10061349693251534,
+ "grad_norm": 0.09000247716903687,
+ "learning_rate": 6.211656441717792e-06,
+ "loss": 1.354022741317749,
+ "step": 82
+ },
+ {
+ "epoch": 0.10184049079754601,
+ "grad_norm": 0.09123768657445908,
+ "learning_rate": 6.288343558282209e-06,
+ "loss": 1.3554480075836182,
+ "step": 83
+ },
+ {
+ "epoch": 0.10306748466257669,
+ "grad_norm": 0.09282216429710388,
+ "learning_rate": 6.365030674846626e-06,
+ "loss": 1.2707648277282715,
+ "step": 84
+ },
+ {
+ "epoch": 0.10429447852760736,
+ "grad_norm": 0.09543449431657791,
+ "learning_rate": 6.4417177914110434e-06,
+ "loss": 1.2168090343475342,
+ "step": 85
+ },
+ {
+ "epoch": 0.10552147239263804,
+ "grad_norm": 0.09843157976865768,
+ "learning_rate": 6.51840490797546e-06,
+ "loss": 1.260111927986145,
+ "step": 86
+ },
+ {
+ "epoch": 0.1067484662576687,
+ "grad_norm": 0.11217689514160156,
+ "learning_rate": 6.595092024539877e-06,
+ "loss": 1.3698742389678955,
+ "step": 87
+ },
+ {
+ "epoch": 0.10797546012269939,
+ "grad_norm": 0.09749456495046616,
+ "learning_rate": 6.671779141104295e-06,
+ "loss": 1.3580715656280518,
+ "step": 88
+ },
+ {
+ "epoch": 0.10920245398773006,
+ "grad_norm": 0.10477589815855026,
+ "learning_rate": 6.748466257668712e-06,
+ "loss": 1.3862428665161133,
+ "step": 89
+ },
+ {
+ "epoch": 0.11042944785276074,
+ "grad_norm": 0.10288488119840622,
+ "learning_rate": 6.825153374233129e-06,
+ "loss": 1.265109658241272,
+ "step": 90
+ },
+ {
+ "epoch": 0.1116564417177914,
+ "grad_norm": 0.09684387594461441,
+ "learning_rate": 6.901840490797547e-06,
+ "loss": 1.323692798614502,
+ "step": 91
+ },
+ {
+ "epoch": 0.11288343558282209,
+ "grad_norm": 0.10333628952503204,
+ "learning_rate": 6.9785276073619635e-06,
+ "loss": 1.389283537864685,
+ "step": 92
+ },
+ {
+ "epoch": 0.11411042944785275,
+ "grad_norm": 0.10879465192556381,
+ "learning_rate": 7.05521472392638e-06,
+ "loss": 1.395522117614746,
+ "step": 93
+ },
+ {
+ "epoch": 0.11533742331288344,
+ "grad_norm": 0.09067656099796295,
+ "learning_rate": 7.131901840490798e-06,
+ "loss": 1.17533540725708,
+ "step": 94
+ },
+ {
+ "epoch": 0.1165644171779141,
+ "grad_norm": 0.09927186369895935,
+ "learning_rate": 7.208588957055215e-06,
+ "loss": 1.3598401546478271,
+ "step": 95
+ },
+ {
+ "epoch": 0.11779141104294479,
+ "grad_norm": 0.1038227528333664,
+ "learning_rate": 7.285276073619632e-06,
+ "loss": 1.392061471939087,
+ "step": 96
+ },
+ {
+ "epoch": 0.11901840490797547,
+ "grad_norm": 0.09537981450557709,
+ "learning_rate": 7.36196319018405e-06,
+ "loss": 1.2397699356079102,
+ "step": 97
+ },
+ {
+ "epoch": 0.12024539877300613,
+ "grad_norm": 0.08968851715326309,
+ "learning_rate": 7.438650306748467e-06,
+ "loss": 1.295367956161499,
+ "step": 98
+ },
+ {
+ "epoch": 0.12147239263803682,
+ "grad_norm": 0.10279387980699539,
+ "learning_rate": 7.5153374233128836e-06,
+ "loss": 1.2284561395645142,
+ "step": 99
+ },
+ {
+ "epoch": 0.12269938650306748,
+ "grad_norm": 0.09359531849622726,
+ "learning_rate": 7.592024539877301e-06,
+ "loss": 1.2822641134262085,
+ "step": 100
+ },
+ {
+ "epoch": 0.12392638036809817,
+ "grad_norm": 0.07995408773422241,
+ "learning_rate": 7.668711656441718e-06,
+ "loss": 1.2809827327728271,
+ "step": 101
+ },
+ {
+ "epoch": 0.12515337423312883,
+ "grad_norm": 0.08832208067178726,
+ "learning_rate": 7.745398773006135e-06,
+ "loss": 1.3367336988449097,
+ "step": 102
+ },
+ {
+ "epoch": 0.1263803680981595,
+ "grad_norm": 0.09752494096755981,
+ "learning_rate": 7.822085889570554e-06,
+ "loss": 1.2547138929367065,
+ "step": 103
+ },
+ {
+ "epoch": 0.1276073619631902,
+ "grad_norm": 0.0881728082895279,
+ "learning_rate": 7.89877300613497e-06,
+ "loss": 1.3664880990982056,
+ "step": 104
+ },
+ {
+ "epoch": 0.12883435582822086,
+ "grad_norm": 0.0837954729795456,
+ "learning_rate": 7.975460122699386e-06,
+ "loss": 1.217215895652771,
+ "step": 105
+ },
+ {
+ "epoch": 0.13006134969325153,
+ "grad_norm": 0.09412740916013718,
+ "learning_rate": 8.052147239263803e-06,
+ "loss": 1.381158709526062,
+ "step": 106
+ },
+ {
+ "epoch": 0.1312883435582822,
+ "grad_norm": 0.08788621425628662,
+ "learning_rate": 8.128834355828221e-06,
+ "loss": 1.1602442264556885,
+ "step": 107
+ },
+ {
+ "epoch": 0.1325153374233129,
+ "grad_norm": 0.07725191116333008,
+ "learning_rate": 8.205521472392638e-06,
+ "loss": 1.263871669769287,
+ "step": 108
+ },
+ {
+ "epoch": 0.13374233128834356,
+ "grad_norm": 0.086419977247715,
+ "learning_rate": 8.282208588957055e-06,
+ "loss": 1.3760960102081299,
+ "step": 109
+ },
+ {
+ "epoch": 0.13496932515337423,
+ "grad_norm": 0.09353740513324738,
+ "learning_rate": 8.358895705521474e-06,
+ "loss": 1.3359466791152954,
+ "step": 110
+ },
+ {
+ "epoch": 0.1361963190184049,
+ "grad_norm": 0.07447539269924164,
+ "learning_rate": 8.435582822085889e-06,
+ "loss": 1.2698533535003662,
+ "step": 111
+ },
+ {
+ "epoch": 0.1374233128834356,
+ "grad_norm": 0.10644813627004623,
+ "learning_rate": 8.512269938650306e-06,
+ "loss": 1.1220934391021729,
+ "step": 112
+ },
+ {
+ "epoch": 0.13865030674846626,
+ "grad_norm": 0.06901716440916061,
+ "learning_rate": 8.588957055214725e-06,
+ "loss": 1.1655170917510986,
+ "step": 113
+ },
+ {
+ "epoch": 0.13987730061349693,
+ "grad_norm": 0.0855763703584671,
+ "learning_rate": 8.665644171779141e-06,
+ "loss": 1.1486337184906006,
+ "step": 114
+ },
+ {
+ "epoch": 0.1411042944785276,
+ "grad_norm": 0.09067295491695404,
+ "learning_rate": 8.742331288343558e-06,
+ "loss": 1.1816922426223755,
+ "step": 115
+ },
+ {
+ "epoch": 0.1423312883435583,
+ "grad_norm": 0.0844568982720375,
+ "learning_rate": 8.819018404907977e-06,
+ "loss": 1.215814232826233,
+ "step": 116
+ },
+ {
+ "epoch": 0.14355828220858896,
+ "grad_norm": 0.06900807470083237,
+ "learning_rate": 8.895705521472392e-06,
+ "loss": 1.2119998931884766,
+ "step": 117
+ },
+ {
+ "epoch": 0.14478527607361963,
+ "grad_norm": 0.07901526242494583,
+ "learning_rate": 8.972392638036809e-06,
+ "loss": 1.2455894947052002,
+ "step": 118
+ },
+ {
+ "epoch": 0.1460122699386503,
+ "grad_norm": 0.07439431548118591,
+ "learning_rate": 9.049079754601228e-06,
+ "loss": 1.2518129348754883,
+ "step": 119
+ },
+ {
+ "epoch": 0.147239263803681,
+ "grad_norm": 0.08350270241498947,
+ "learning_rate": 9.125766871165645e-06,
+ "loss": 1.2978299856185913,
+ "step": 120
+ },
+ {
+ "epoch": 0.14846625766871166,
+ "grad_norm": 0.07621041685342789,
+ "learning_rate": 9.202453987730062e-06,
+ "loss": 1.2295458316802979,
+ "step": 121
+ },
+ {
+ "epoch": 0.14969325153374233,
+ "grad_norm": 0.08300689607858658,
+ "learning_rate": 9.27914110429448e-06,
+ "loss": 1.3470526933670044,
+ "step": 122
+ },
+ {
+ "epoch": 0.150920245398773,
+ "grad_norm": 0.0724540650844574,
+ "learning_rate": 9.355828220858897e-06,
+ "loss": 1.1989648342132568,
+ "step": 123
+ },
+ {
+ "epoch": 0.1521472392638037,
+ "grad_norm": 0.07644130289554596,
+ "learning_rate": 9.432515337423312e-06,
+ "loss": 1.2179319858551025,
+ "step": 124
+ },
+ {
+ "epoch": 0.15337423312883436,
+ "grad_norm": 0.07522676140069962,
+ "learning_rate": 9.509202453987731e-06,
+ "loss": 1.2217178344726562,
+ "step": 125
+ },
+ {
+ "epoch": 0.15460122699386503,
+ "grad_norm": 0.07084149122238159,
+ "learning_rate": 9.585889570552148e-06,
+ "loss": 1.1610798835754395,
+ "step": 126
+ },
+ {
+ "epoch": 0.1558282208588957,
+ "grad_norm": 0.07349451631307602,
+ "learning_rate": 9.662576687116565e-06,
+ "loss": 1.2103276252746582,
+ "step": 127
+ },
+ {
+ "epoch": 0.1570552147239264,
+ "grad_norm": 0.07179653644561768,
+ "learning_rate": 9.739263803680983e-06,
+ "loss": 1.247308373451233,
+ "step": 128
+ },
+ {
+ "epoch": 0.15828220858895706,
+ "grad_norm": 0.0752854123711586,
+ "learning_rate": 9.8159509202454e-06,
+ "loss": 1.2092695236206055,
+ "step": 129
+ },
+ {
+ "epoch": 0.15950920245398773,
+ "grad_norm": 0.06951854377985,
+ "learning_rate": 9.892638036809815e-06,
+ "loss": 1.1004083156585693,
+ "step": 130
+ },
+ {
+ "epoch": 0.1607361963190184,
+ "grad_norm": 0.07509293407201767,
+ "learning_rate": 9.969325153374232e-06,
+ "loss": 1.2282463312149048,
+ "step": 131
+ },
+ {
+ "epoch": 0.1619631901840491,
+ "grad_norm": 0.07161499559879303,
+ "learning_rate": 1.0046012269938651e-05,
+ "loss": 1.312925100326538,
+ "step": 132
+ },
+ {
+ "epoch": 0.16319018404907976,
+ "grad_norm": 0.07607123255729675,
+ "learning_rate": 1.0122699386503068e-05,
+ "loss": 1.3124730587005615,
+ "step": 133
+ },
+ {
+ "epoch": 0.16441717791411042,
+ "grad_norm": 0.07267063856124878,
+ "learning_rate": 1.0199386503067485e-05,
+ "loss": 1.1374307870864868,
+ "step": 134
+ },
+ {
+ "epoch": 0.1656441717791411,
+ "grad_norm": 0.07660125941038132,
+ "learning_rate": 1.0276073619631903e-05,
+ "loss": 1.1595758199691772,
+ "step": 135
+ },
+ {
+ "epoch": 0.1668711656441718,
+ "grad_norm": 0.07634468376636505,
+ "learning_rate": 1.0352760736196319e-05,
+ "loss": 1.1585400104522705,
+ "step": 136
+ },
+ {
+ "epoch": 0.16809815950920245,
+ "grad_norm": 0.07540547847747803,
+ "learning_rate": 1.0429447852760736e-05,
+ "loss": 1.2559616565704346,
+ "step": 137
+ },
+ {
+ "epoch": 0.16932515337423312,
+ "grad_norm": 0.06400100141763687,
+ "learning_rate": 1.0506134969325154e-05,
+ "loss": 1.2289286851882935,
+ "step": 138
+ },
+ {
+ "epoch": 0.1705521472392638,
+ "grad_norm": 0.08367716521024704,
+ "learning_rate": 1.0582822085889571e-05,
+ "loss": 1.169529676437378,
+ "step": 139
+ },
+ {
+ "epoch": 0.17177914110429449,
+ "grad_norm": 0.07152585685253143,
+ "learning_rate": 1.0659509202453988e-05,
+ "loss": 1.2492527961730957,
+ "step": 140
+ },
+ {
+ "epoch": 0.17300613496932515,
+ "grad_norm": 0.07628720253705978,
+ "learning_rate": 1.0736196319018407e-05,
+ "loss": 1.169081687927246,
+ "step": 141
+ },
+ {
+ "epoch": 0.17423312883435582,
+ "grad_norm": 0.07779771089553833,
+ "learning_rate": 1.0812883435582823e-05,
+ "loss": 1.294127106666565,
+ "step": 142
+ },
+ {
+ "epoch": 0.1754601226993865,
+ "grad_norm": 0.0831819698214531,
+ "learning_rate": 1.0889570552147239e-05,
+ "loss": 1.191867470741272,
+ "step": 143
+ },
+ {
+ "epoch": 0.17668711656441718,
+ "grad_norm": 0.07639773935079575,
+ "learning_rate": 1.0966257668711657e-05,
+ "loss": 1.1590148210525513,
+ "step": 144
+ },
+ {
+ "epoch": 0.17791411042944785,
+ "grad_norm": 0.06675959378480911,
+ "learning_rate": 1.1042944785276074e-05,
+ "loss": 1.1139436960220337,
+ "step": 145
+ },
+ {
+ "epoch": 0.17914110429447852,
+ "grad_norm": 0.07247823476791382,
+ "learning_rate": 1.1119631901840491e-05,
+ "loss": 1.1570465564727783,
+ "step": 146
+ },
+ {
+ "epoch": 0.18036809815950922,
+ "grad_norm": 0.0793304368853569,
+ "learning_rate": 1.119631901840491e-05,
+ "loss": 1.3373682498931885,
+ "step": 147
+ },
+ {
+ "epoch": 0.18159509202453988,
+ "grad_norm": 0.08718875050544739,
+ "learning_rate": 1.1273006134969327e-05,
+ "loss": 1.1467812061309814,
+ "step": 148
+ },
+ {
+ "epoch": 0.18282208588957055,
+ "grad_norm": 0.07898775488138199,
+ "learning_rate": 1.1349693251533742e-05,
+ "loss": 1.1690555810928345,
+ "step": 149
+ },
+ {
+ "epoch": 0.18404907975460122,
+ "grad_norm": 0.0748438760638237,
+ "learning_rate": 1.142638036809816e-05,
+ "loss": 1.286328673362732,
+ "step": 150
+ },
+ {
+ "epoch": 0.18527607361963191,
+ "grad_norm": 0.07474487274885178,
+ "learning_rate": 1.1503067484662577e-05,
+ "loss": 1.2229455709457397,
+ "step": 151
+ },
+ {
+ "epoch": 0.18650306748466258,
+ "grad_norm": 0.07763181626796722,
+ "learning_rate": 1.1579754601226994e-05,
+ "loss": 1.131850242614746,
+ "step": 152
+ },
+ {
+ "epoch": 0.18773006134969325,
+ "grad_norm": 0.07221898436546326,
+ "learning_rate": 1.1656441717791411e-05,
+ "loss": 1.1305968761444092,
+ "step": 153
+ },
+ {
+ "epoch": 0.18895705521472392,
+ "grad_norm": 0.07231416553258896,
+ "learning_rate": 1.173312883435583e-05,
+ "loss": 1.2581983804702759,
+ "step": 154
+ },
+ {
+ "epoch": 0.1901840490797546,
+ "grad_norm": 0.0842069461941719,
+ "learning_rate": 1.1809815950920245e-05,
+ "loss": 1.1937918663024902,
+ "step": 155
+ },
+ {
+ "epoch": 0.19141104294478528,
+ "grad_norm": 0.07470091432332993,
+ "learning_rate": 1.1886503067484662e-05,
+ "loss": 1.1321823596954346,
+ "step": 156
+ },
+ {
+ "epoch": 0.19263803680981595,
+ "grad_norm": 0.07048668712377548,
+ "learning_rate": 1.196319018404908e-05,
+ "loss": 1.186060905456543,
+ "step": 157
+ },
+ {
+ "epoch": 0.19386503067484662,
+ "grad_norm": 0.15513236820697784,
+ "learning_rate": 1.2039877300613497e-05,
+ "loss": 1.1082484722137451,
+ "step": 158
+ },
+ {
+ "epoch": 0.1950920245398773,
+ "grad_norm": 0.07174337655305862,
+ "learning_rate": 1.2116564417177914e-05,
+ "loss": 1.0334205627441406,
+ "step": 159
+ },
+ {
+ "epoch": 0.19631901840490798,
+ "grad_norm": 0.08637174218893051,
+ "learning_rate": 1.2193251533742333e-05,
+ "loss": 1.2800869941711426,
+ "step": 160
+ },
+ {
+ "epoch": 0.19754601226993865,
+ "grad_norm": 0.07679158449172974,
+ "learning_rate": 1.2269938650306748e-05,
+ "loss": 1.0969552993774414,
+ "step": 161
+ },
+ {
+ "epoch": 0.19877300613496932,
+ "grad_norm": 0.07408922910690308,
+ "learning_rate": 1.2346625766871165e-05,
+ "loss": 1.0420407056808472,
+ "step": 162
+ },
+ {
+ "epoch": 0.2,
+ "grad_norm": 0.07723739743232727,
+ "learning_rate": 1.2423312883435584e-05,
+ "loss": 1.2170075178146362,
+ "step": 163
+ },
+ {
+ "epoch": 0.20122699386503068,
+ "grad_norm": 0.09736181795597076,
+ "learning_rate": 1.25e-05,
+ "loss": 1.092185139656067,
+ "step": 164
+ },
+ {
+ "epoch": 0.20245398773006135,
+ "grad_norm": 0.08557657897472382,
+ "learning_rate": 1.2576687116564418e-05,
+ "loss": 1.1464152336120605,
+ "step": 165
+ },
+ {
+ "epoch": 0.20368098159509201,
+ "grad_norm": 0.08090342581272125,
+ "learning_rate": 1.2653374233128834e-05,
+ "loss": 1.0590012073516846,
+ "step": 166
+ },
+ {
+ "epoch": 0.2049079754601227,
+ "grad_norm": 0.0782066136598587,
+ "learning_rate": 1.2730061349693251e-05,
+ "loss": 1.093062162399292,
+ "step": 167
+ },
+ {
+ "epoch": 0.20613496932515338,
+ "grad_norm": 0.07601075619459152,
+ "learning_rate": 1.280674846625767e-05,
+ "loss": 1.0593127012252808,
+ "step": 168
+ },
+ {
+ "epoch": 0.20736196319018405,
+ "grad_norm": 0.07887541502714157,
+ "learning_rate": 1.2883435582822087e-05,
+ "loss": 1.1101746559143066,
+ "step": 169
+ },
+ {
+ "epoch": 0.2085889570552147,
+ "grad_norm": 0.08362209796905518,
+ "learning_rate": 1.2960122699386504e-05,
+ "loss": 1.127945065498352,
+ "step": 170
+ },
+ {
+ "epoch": 0.2098159509202454,
+ "grad_norm": 0.07187578082084656,
+ "learning_rate": 1.303680981595092e-05,
+ "loss": 0.9966148138046265,
+ "step": 171
+ },
+ {
+ "epoch": 0.21104294478527608,
+ "grad_norm": 0.08967337757349014,
+ "learning_rate": 1.3113496932515338e-05,
+ "loss": 1.2105052471160889,
+ "step": 172
+ },
+ {
+ "epoch": 0.21226993865030674,
+ "grad_norm": 0.08062509447336197,
+ "learning_rate": 1.3190184049079754e-05,
+ "loss": 1.124169111251831,
+ "step": 173
+ },
+ {
+ "epoch": 0.2134969325153374,
+ "grad_norm": 0.08591330051422119,
+ "learning_rate": 1.3266871165644173e-05,
+ "loss": 1.2784122228622437,
+ "step": 174
+ },
+ {
+ "epoch": 0.2147239263803681,
+ "grad_norm": 0.07611163705587387,
+ "learning_rate": 1.334355828220859e-05,
+ "loss": 1.3006565570831299,
+ "step": 175
+ },
+ {
+ "epoch": 0.21595092024539878,
+ "grad_norm": 0.09474623948335648,
+ "learning_rate": 1.3420245398773007e-05,
+ "loss": 1.0671237707138062,
+ "step": 176
+ },
+ {
+ "epoch": 0.21717791411042944,
+ "grad_norm": 0.08161427825689316,
+ "learning_rate": 1.3496932515337424e-05,
+ "loss": 1.1499321460723877,
+ "step": 177
+ },
+ {
+ "epoch": 0.2184049079754601,
+ "grad_norm": 0.0793696790933609,
+ "learning_rate": 1.357361963190184e-05,
+ "loss": 1.1806647777557373,
+ "step": 178
+ },
+ {
+ "epoch": 0.2196319018404908,
+ "grad_norm": 0.07412323355674744,
+ "learning_rate": 1.3650306748466258e-05,
+ "loss": 1.1109105348587036,
+ "step": 179
+ },
+ {
+ "epoch": 0.22085889570552147,
+ "grad_norm": 0.09603255242109299,
+ "learning_rate": 1.3726993865030676e-05,
+ "loss": 1.0471618175506592,
+ "step": 180
+ },
+ {
+ "epoch": 0.22208588957055214,
+ "grad_norm": 0.08905616402626038,
+ "learning_rate": 1.3803680981595093e-05,
+ "loss": 1.1427825689315796,
+ "step": 181
+ },
+ {
+ "epoch": 0.2233128834355828,
+ "grad_norm": 0.09254536777734756,
+ "learning_rate": 1.388036809815951e-05,
+ "loss": 1.1951453685760498,
+ "step": 182
+ },
+ {
+ "epoch": 0.2245398773006135,
+ "grad_norm": 0.08542929589748383,
+ "learning_rate": 1.3957055214723927e-05,
+ "loss": 1.1082501411437988,
+ "step": 183
+ },
+ {
+ "epoch": 0.22576687116564417,
+ "grad_norm": 0.08667684346437454,
+ "learning_rate": 1.4033742331288344e-05,
+ "loss": 1.172175645828247,
+ "step": 184
+ },
+ {
+ "epoch": 0.22699386503067484,
+ "grad_norm": 0.08812960237264633,
+ "learning_rate": 1.411042944785276e-05,
+ "loss": 1.113857626914978,
+ "step": 185
+ },
+ {
+ "epoch": 0.2282208588957055,
+ "grad_norm": 0.08759750425815582,
+ "learning_rate": 1.418711656441718e-05,
+ "loss": 1.0720980167388916,
+ "step": 186
+ },
+ {
+ "epoch": 0.2294478527607362,
+ "grad_norm": 0.07663634419441223,
+ "learning_rate": 1.4263803680981596e-05,
+ "loss": 1.1362175941467285,
+ "step": 187
+ },
+ {
+ "epoch": 0.23067484662576687,
+ "grad_norm": 0.0884539932012558,
+ "learning_rate": 1.4340490797546013e-05,
+ "loss": 1.030583381652832,
+ "step": 188
+ },
+ {
+ "epoch": 0.23190184049079754,
+ "grad_norm": 0.07207372039556503,
+ "learning_rate": 1.441717791411043e-05,
+ "loss": 1.1791778802871704,
+ "step": 189
+ },
+ {
+ "epoch": 0.2331288343558282,
+ "grad_norm": 0.09309247136116028,
+ "learning_rate": 1.4493865030674847e-05,
+ "loss": 1.1558406352996826,
+ "step": 190
+ },
+ {
+ "epoch": 0.2343558282208589,
+ "grad_norm": 0.08264083415269852,
+ "learning_rate": 1.4570552147239264e-05,
+ "loss": 1.1436799764633179,
+ "step": 191
+ },
+ {
+ "epoch": 0.23558282208588957,
+ "grad_norm": 0.08998411893844604,
+ "learning_rate": 1.4647239263803681e-05,
+ "loss": 0.9984644055366516,
+ "step": 192
+ },
+ {
+ "epoch": 0.23680981595092024,
+ "grad_norm": 0.09304746985435486,
+ "learning_rate": 1.47239263803681e-05,
+ "loss": 1.1322174072265625,
+ "step": 193
+ },
+ {
+ "epoch": 0.23803680981595093,
+ "grad_norm": 0.09598280489444733,
+ "learning_rate": 1.4800613496932516e-05,
+ "loss": 1.103399634361267,
+ "step": 194
+ },
+ {
+ "epoch": 0.2392638036809816,
+ "grad_norm": 0.09808170050382614,
+ "learning_rate": 1.4877300613496933e-05,
+ "loss": 1.1747708320617676,
+ "step": 195
+ },
+ {
+ "epoch": 0.24049079754601227,
+ "grad_norm": 0.08757835626602173,
+ "learning_rate": 1.495398773006135e-05,
+ "loss": 1.135438323020935,
+ "step": 196
+ },
+ {
+ "epoch": 0.24171779141104294,
+ "grad_norm": 0.08952710032463074,
+ "learning_rate": 1.5030674846625767e-05,
+ "loss": 1.0426082611083984,
+ "step": 197
+ },
+ {
+ "epoch": 0.24294478527607363,
+ "grad_norm": 0.09966392815113068,
+ "learning_rate": 1.5107361963190184e-05,
+ "loss": 1.2231348752975464,
+ "step": 198
+ },
+ {
+ "epoch": 0.2441717791411043,
+ "grad_norm": 0.08228705823421478,
+ "learning_rate": 1.5184049079754603e-05,
+ "loss": 1.0428478717803955,
+ "step": 199
+ },
+ {
+ "epoch": 0.24539877300613497,
+ "grad_norm": 0.09374398738145828,
+ "learning_rate": 1.526073619631902e-05,
+ "loss": 1.057676076889038,
+ "step": 200
+ },
+ {
+ "epoch": 0.24662576687116564,
+ "grad_norm": 0.08426011353731155,
+ "learning_rate": 1.5337423312883436e-05,
+ "loss": 1.1756813526153564,
+ "step": 201
+ },
+ {
+ "epoch": 0.24785276073619633,
+ "grad_norm": 0.08629194647073746,
+ "learning_rate": 1.5414110429447852e-05,
+ "loss": 1.1612563133239746,
+ "step": 202
+ },
+ {
+ "epoch": 0.249079754601227,
+ "grad_norm": 0.1029244139790535,
+ "learning_rate": 1.549079754601227e-05,
+ "loss": 1.0064386129379272,
+ "step": 203
+ },
+ {
+ "epoch": 0.25030674846625767,
+ "grad_norm": 0.08551669865846634,
+ "learning_rate": 1.5567484662576686e-05,
+ "loss": 1.2138303518295288,
+ "step": 204
+ },
+ {
+ "epoch": 0.25153374233128833,
+ "grad_norm": 0.08521132916212082,
+ "learning_rate": 1.5644171779141108e-05,
+ "loss": 1.0655517578125,
+ "step": 205
+ },
+ {
+ "epoch": 0.252760736196319,
+ "grad_norm": 0.0902361199259758,
+ "learning_rate": 1.5720858895705523e-05,
+ "loss": 1.1724493503570557,
+ "step": 206
+ },
+ {
+ "epoch": 0.25398773006134967,
+ "grad_norm": 0.0905335545539856,
+ "learning_rate": 1.579754601226994e-05,
+ "loss": 1.0633145570755005,
+ "step": 207
+ },
+ {
+ "epoch": 0.2552147239263804,
+ "grad_norm": 0.10272892564535141,
+ "learning_rate": 1.5874233128834357e-05,
+ "loss": 1.034580111503601,
+ "step": 208
+ },
+ {
+ "epoch": 0.25644171779141106,
+ "grad_norm": 0.08878074586391449,
+ "learning_rate": 1.5950920245398772e-05,
+ "loss": 1.1094441413879395,
+ "step": 209
+ },
+ {
+ "epoch": 0.25766871165644173,
+ "grad_norm": 0.09403765946626663,
+ "learning_rate": 1.602760736196319e-05,
+ "loss": 1.0656667947769165,
+ "step": 210
+ },
+ {
+ "epoch": 0.2588957055214724,
+ "grad_norm": 0.10467862337827682,
+ "learning_rate": 1.6104294478527606e-05,
+ "loss": 1.1456003189086914,
+ "step": 211
+ },
+ {
+ "epoch": 0.26012269938650306,
+ "grad_norm": 0.09688597917556763,
+ "learning_rate": 1.6180981595092028e-05,
+ "loss": 1.124730110168457,
+ "step": 212
+ },
+ {
+ "epoch": 0.26134969325153373,
+ "grad_norm": 0.09323239326477051,
+ "learning_rate": 1.6257668711656443e-05,
+ "loss": 1.1329385042190552,
+ "step": 213
+ },
+ {
+ "epoch": 0.2625766871165644,
+ "grad_norm": 0.10989087074995041,
+ "learning_rate": 1.633435582822086e-05,
+ "loss": 1.2534266710281372,
+ "step": 214
+ },
+ {
+ "epoch": 0.26380368098159507,
+ "grad_norm": 0.10287831723690033,
+ "learning_rate": 1.6411042944785277e-05,
+ "loss": 1.0936100482940674,
+ "step": 215
+ },
+ {
+ "epoch": 0.2650306748466258,
+ "grad_norm": 0.09896902740001678,
+ "learning_rate": 1.6487730061349692e-05,
+ "loss": 1.0307230949401855,
+ "step": 216
+ },
+ {
+ "epoch": 0.26625766871165646,
+ "grad_norm": 0.10131006687879562,
+ "learning_rate": 1.656441717791411e-05,
+ "loss": 1.1618880033493042,
+ "step": 217
+ },
+ {
+ "epoch": 0.2674846625766871,
+ "grad_norm": 0.10399206727743149,
+ "learning_rate": 1.664110429447853e-05,
+ "loss": 1.1229549646377563,
+ "step": 218
+ },
+ {
+ "epoch": 0.2687116564417178,
+ "grad_norm": 0.10053529590368271,
+ "learning_rate": 1.6717791411042948e-05,
+ "loss": 1.1616917848587036,
+ "step": 219
+ },
+ {
+ "epoch": 0.26993865030674846,
+ "grad_norm": 0.09529908001422882,
+ "learning_rate": 1.6794478527607363e-05,
+ "loss": 1.0340030193328857,
+ "step": 220
+ },
+ {
+ "epoch": 0.27116564417177913,
+ "grad_norm": 0.1017000749707222,
+ "learning_rate": 1.6871165644171778e-05,
+ "loss": 1.0947508811950684,
+ "step": 221
+ },
+ {
+ "epoch": 0.2723926380368098,
+ "grad_norm": 0.08638525754213333,
+ "learning_rate": 1.6947852760736197e-05,
+ "loss": 0.9798234701156616,
+ "step": 222
+ },
+ {
+ "epoch": 0.27361963190184047,
+ "grad_norm": 0.10133469104766846,
+ "learning_rate": 1.7024539877300612e-05,
+ "loss": 1.0582237243652344,
+ "step": 223
+ },
+ {
+ "epoch": 0.2748466257668712,
+ "grad_norm": 0.10113056749105453,
+ "learning_rate": 1.7101226993865034e-05,
+ "loss": 1.066800832748413,
+ "step": 224
+ },
+ {
+ "epoch": 0.27607361963190186,
+ "grad_norm": 0.11324694007635117,
+ "learning_rate": 1.717791411042945e-05,
+ "loss": 1.1267540454864502,
+ "step": 225
+ },
+ {
+ "epoch": 0.2773006134969325,
+ "grad_norm": 0.0964084267616272,
+ "learning_rate": 1.7254601226993868e-05,
+ "loss": 1.0880162715911865,
+ "step": 226
+ },
+ {
+ "epoch": 0.2785276073619632,
+ "grad_norm": 0.10354617238044739,
+ "learning_rate": 1.7331288343558283e-05,
+ "loss": 1.150836706161499,
+ "step": 227
+ },
+ {
+ "epoch": 0.27975460122699386,
+ "grad_norm": 0.10236093401908875,
+ "learning_rate": 1.7407975460122698e-05,
+ "loss": 0.9434213042259216,
+ "step": 228
+ },
+ {
+ "epoch": 0.2809815950920245,
+ "grad_norm": 0.09847301989793777,
+ "learning_rate": 1.7484662576687117e-05,
+ "loss": 1.0203105211257935,
+ "step": 229
+ },
+ {
+ "epoch": 0.2822085889570552,
+ "grad_norm": 0.1049259603023529,
+ "learning_rate": 1.7561349693251535e-05,
+ "loss": 1.0045392513275146,
+ "step": 230
+ },
+ {
+ "epoch": 0.28343558282208586,
+ "grad_norm": 0.09362269192934036,
+ "learning_rate": 1.7638036809815954e-05,
+ "loss": 0.9903603196144104,
+ "step": 231
+ },
+ {
+ "epoch": 0.2846625766871166,
+ "grad_norm": 0.08344310522079468,
+ "learning_rate": 1.771472392638037e-05,
+ "loss": 1.078027367591858,
+ "step": 232
+ },
+ {
+ "epoch": 0.28588957055214725,
+ "grad_norm": 0.14989130198955536,
+ "learning_rate": 1.7791411042944784e-05,
+ "loss": 0.8905891180038452,
+ "step": 233
+ },
+ {
+ "epoch": 0.2871165644171779,
+ "grad_norm": 0.11933330446481705,
+ "learning_rate": 1.7868098159509203e-05,
+ "loss": 1.0191709995269775,
+ "step": 234
+ },
+ {
+ "epoch": 0.2883435582822086,
+ "grad_norm": 0.09907590597867966,
+ "learning_rate": 1.7944785276073618e-05,
+ "loss": 0.97510826587677,
+ "step": 235
+ },
+ {
+ "epoch": 0.28957055214723926,
+ "grad_norm": 0.11432763934135437,
+ "learning_rate": 1.8021472392638037e-05,
+ "loss": 1.2066056728363037,
+ "step": 236
+ },
+ {
+ "epoch": 0.2907975460122699,
1666
+ "grad_norm": 0.09880590438842773,
1667
+ "learning_rate": 1.8098159509202455e-05,
1668
+ "loss": 1.0608339309692383,
1669
+ "step": 237
1670
+ },
1671
+ {
1672
+ "epoch": 0.2920245398773006,
1673
+ "grad_norm": 0.09431140124797821,
1674
+ "learning_rate": 1.8174846625766874e-05,
1675
+ "loss": 0.994207501411438,
1676
+ "step": 238
1677
+ },
1678
+ {
1679
+ "epoch": 0.29325153374233126,
1680
+ "grad_norm": 0.11933495849370956,
1681
+ "learning_rate": 1.825153374233129e-05,
1682
+ "loss": 0.9558770656585693,
1683
+ "step": 239
1684
+ },
1685
+ {
1686
+ "epoch": 0.294478527607362,
1687
+ "grad_norm": 0.11472880095243454,
1688
+ "learning_rate": 1.8328220858895704e-05,
1689
+ "loss": 1.0349183082580566,
1690
+ "step": 240
1691
+ },
1692
+ {
1693
+ "epoch": 0.29570552147239265,
1694
+ "grad_norm": 0.09867944568395615,
1695
+ "learning_rate": 1.8404907975460123e-05,
1696
+ "loss": 1.0970228910446167,
1697
+ "step": 241
1698
+ },
1699
+ {
1700
+ "epoch": 0.2969325153374233,
1701
+ "grad_norm": 0.10022281855344772,
1702
+ "learning_rate": 1.848159509202454e-05,
1703
+ "loss": 1.093743920326233,
1704
+ "step": 242
1705
+ },
1706
+ {
1707
+ "epoch": 0.298159509202454,
1708
+ "grad_norm": 0.12287919968366623,
1709
+ "learning_rate": 1.855828220858896e-05,
1710
+ "loss": 1.1307411193847656,
1711
+ "step": 243
1712
+ },
1713
+ {
1714
+ "epoch": 0.29938650306748466,
1715
+ "grad_norm": 0.09696297347545624,
1716
+ "learning_rate": 1.8634969325153376e-05,
1717
+ "loss": 1.1624317169189453,
1718
+ "step": 244
1719
+ },
1720
+ {
1721
+ "epoch": 0.3006134969325153,
1722
+ "grad_norm": 0.11268558353185654,
1723
+ "learning_rate": 1.8711656441717794e-05,
1724
+ "loss": 1.1427913904190063,
1725
+ "step": 245
1726
+ },
1727
+ {
1728
+ "epoch": 0.301840490797546,
1729
+ "grad_norm": 0.09550510346889496,
1730
+ "learning_rate": 1.878834355828221e-05,
1731
+ "loss": 1.0877418518066406,
1732
+ "step": 246
1733
+ },
1734
+ {
1735
+ "epoch": 0.3030674846625767,
1736
+ "grad_norm": 0.09527410566806793,
1737
+ "learning_rate": 1.8865030674846625e-05,
1738
+ "loss": 1.0739914178848267,
1739
+ "step": 247
1740
+ },
1741
+ {
1742
+ "epoch": 0.3042944785276074,
1743
+ "grad_norm": 0.14008916914463043,
1744
+ "learning_rate": 1.8941717791411043e-05,
1745
+ "loss": 0.9480798244476318,
1746
+ "step": 248
1747
+ },
1748
+ {
1749
+ "epoch": 0.30552147239263805,
1750
+ "grad_norm": 0.1293039619922638,
1751
+ "learning_rate": 1.9018404907975462e-05,
1752
+ "loss": 0.992428183555603,
1753
+ "step": 249
1754
+ },
1755
+ {
1756
+ "epoch": 0.3067484662576687,
1757
+ "grad_norm": 0.11473323404788971,
1758
+ "learning_rate": 1.909509202453988e-05,
1759
+ "loss": 1.0926830768585205,
1760
+ "step": 250
1761
+ },
1762
+ {
1763
+ "epoch": 0.3079754601226994,
1764
+ "grad_norm": 0.09950664639472961,
1765
+ "learning_rate": 1.9171779141104296e-05,
1766
+ "loss": 1.2386255264282227,
1767
+ "step": 251
1768
+ },
1769
+ {
1770
+ "epoch": 0.30920245398773005,
1771
+ "grad_norm": 0.10663459450006485,
1772
+ "learning_rate": 1.924846625766871e-05,
1773
+ "loss": 1.137174129486084,
1774
+ "step": 252
1775
+ },
1776
+ {
1777
+ "epoch": 0.3104294478527607,
1778
+ "grad_norm": 0.12015614658594131,
1779
+ "learning_rate": 1.932515337423313e-05,
1780
+ "loss": 1.018200397491455,
1781
+ "step": 253
1782
+ },
1783
+ {
1784
+ "epoch": 0.3116564417177914,
1785
+ "grad_norm": 0.12178294360637665,
1786
+ "learning_rate": 1.9401840490797545e-05,
1787
+ "loss": 1.03646981716156,
1788
+ "step": 254
1789
+ },
1790
+ {
1791
+ "epoch": 0.3128834355828221,
1792
+ "grad_norm": 0.09648360311985016,
1793
+ "learning_rate": 1.9478527607361967e-05,
1794
+ "loss": 1.1479485034942627,
1795
+ "step": 255
1796
+ },
1797
+ {
1798
+ "epoch": 0.3141104294478528,
1799
+ "grad_norm": 0.10974877327680588,
1800
+ "learning_rate": 1.9555214723926382e-05,
1801
+ "loss": 1.1105828285217285,
1802
+ "step": 256
1803
+ },
1804
+ {
1805
+ "epoch": 0.31533742331288345,
1806
+ "grad_norm": 0.13123326003551483,
1807
+ "learning_rate": 1.96319018404908e-05,
1808
+ "loss": 1.0254418849945068,
1809
+ "step": 257
1810
+ },
1811
+ {
1812
+ "epoch": 0.3165644171779141,
1813
+ "grad_norm": 0.12058497965335846,
1814
+ "learning_rate": 1.9708588957055216e-05,
1815
+ "loss": 0.8520700335502625,
1816
+ "step": 258
1817
+ },
1818
+ {
1819
+ "epoch": 0.3177914110429448,
1820
+ "grad_norm": 0.10527869313955307,
1821
+ "learning_rate": 1.978527607361963e-05,
1822
+ "loss": 1.057407021522522,
1823
+ "step": 259
1824
+ },
1825
+ {
1826
+ "epoch": 0.31901840490797545,
1827
+ "grad_norm": 0.12831807136535645,
1828
+ "learning_rate": 1.986196319018405e-05,
1829
+ "loss": 0.9605754017829895,
1830
+ "step": 260
1831
+ },
1832
+ {
1833
+ "epoch": 0.3202453987730061,
1834
+ "grad_norm": 0.10611746460199356,
1835
+ "learning_rate": 1.9938650306748465e-05,
1836
+ "loss": 1.121097445487976,
1837
+ "step": 261
1838
+ },
1839
+ {
1840
+ "epoch": 0.3214723926380368,
1841
+ "grad_norm": 0.10287187248468399,
1842
+ "learning_rate": 2.0015337423312887e-05,
1843
+ "loss": 1.0894691944122314,
1844
+ "step": 262
1845
+ },
1846
+ {
1847
+ "epoch": 0.3226993865030675,
1848
+ "grad_norm": 0.11541418731212616,
1849
+ "learning_rate": 2.0092024539877302e-05,
1850
+ "loss": 0.9634550213813782,
1851
+ "step": 263
1852
+ },
1853
+ {
1854
+ "epoch": 0.3239263803680982,
1855
+ "grad_norm": 0.13788478076457977,
1856
+ "learning_rate": 2.016871165644172e-05,
1857
+ "loss": 1.0188679695129395,
1858
+ "step": 264
1859
+ },
1860
+ {
1861
+ "epoch": 0.32515337423312884,
1862
+ "grad_norm": 0.12389227002859116,
1863
+ "learning_rate": 2.0245398773006136e-05,
1864
+ "loss": 1.0685219764709473,
1865
+ "step": 265
1866
+ },
1867
+ {
1868
+ "epoch": 0.3263803680981595,
1869
+ "grad_norm": 0.1358790248632431,
1870
+ "learning_rate": 2.032208588957055e-05,
1871
+ "loss": 0.9753469228744507,
1872
+ "step": 266
1873
+ },
1874
+ {
1875
+ "epoch": 0.3276073619631902,
1876
+ "grad_norm": 0.1116497740149498,
1877
+ "learning_rate": 2.039877300613497e-05,
1878
+ "loss": 0.957251787185669,
1879
+ "step": 267
1880
+ },
1881
+ {
1882
+ "epoch": 0.32883435582822085,
1883
+ "grad_norm": 0.11138273775577545,
1884
+ "learning_rate": 2.0475460122699388e-05,
1885
+ "loss": 0.9344768524169922,
1886
+ "step": 268
1887
+ },
1888
+ {
1889
+ "epoch": 0.3300613496932515,
1890
+ "grad_norm": 0.1283080279827118,
1891
+ "learning_rate": 2.0552147239263807e-05,
1892
+ "loss": 1.0072486400604248,
1893
+ "step": 269
1894
+ },
1895
+ {
1896
+ "epoch": 0.3312883435582822,
1897
+ "grad_norm": 0.1293177753686905,
1898
+ "learning_rate": 2.0628834355828222e-05,
1899
+ "loss": 0.9404339790344238,
1900
+ "step": 270
1901
+ },
1902
+ {
1903
+ "epoch": 0.3325153374233129,
1904
+ "grad_norm": 0.10337118804454803,
1905
+ "learning_rate": 2.0705521472392637e-05,
1906
+ "loss": 0.9937950372695923,
1907
+ "step": 271
1908
+ },
1909
+ {
1910
+ "epoch": 0.3337423312883436,
1911
+ "grad_norm": 0.10898245871067047,
1912
+ "learning_rate": 2.0782208588957056e-05,
1913
+ "loss": 1.1300337314605713,
1914
+ "step": 272
1915
+ },
1916
+ {
1917
+ "epoch": 0.33496932515337424,
1918
+ "grad_norm": 0.1548905074596405,
1919
+ "learning_rate": 2.085889570552147e-05,
1920
+ "loss": 0.9663505554199219,
1921
+ "step": 273
1922
+ },
1923
+ {
1924
+ "epoch": 0.3361963190184049,
1925
+ "grad_norm": 0.12634064257144928,
1926
+ "learning_rate": 2.0935582822085893e-05,
1927
+ "loss": 0.8915446400642395,
1928
+ "step": 274
1929
+ },
1930
+ {
1931
+ "epoch": 0.3374233128834356,
1932
+ "grad_norm": 0.09657935798168182,
1933
+ "learning_rate": 2.1012269938650308e-05,
1934
+ "loss": 0.9893382787704468,
1935
+ "step": 275
1936
+ },
1937
+ {
1938
+ "epoch": 0.33865030674846625,
1939
+ "grad_norm": 0.17486917972564697,
1940
+ "learning_rate": 2.1088957055214727e-05,
1941
+ "loss": 1.070754051208496,
1942
+ "step": 276
1943
+ },
1944
+ {
1945
+ "epoch": 0.3398773006134969,
1946
+ "grad_norm": 0.14501982927322388,
1947
+ "learning_rate": 2.1165644171779142e-05,
1948
+ "loss": 1.1186362504959106,
1949
+ "step": 277
1950
+ },
1951
+ {
1952
+ "epoch": 0.3411042944785276,
1953
+ "grad_norm": 0.13628800213336945,
1954
+ "learning_rate": 2.1242331288343557e-05,
1955
+ "loss": 0.9450318217277527,
1956
+ "step": 278
1957
+ },
1958
+ {
1959
+ "epoch": 0.3423312883435583,
1960
+ "grad_norm": 0.11076204478740692,
1961
+ "learning_rate": 2.1319018404907976e-05,
1962
+ "loss": 1.005497932434082,
1963
+ "step": 279
1964
+ },
1965
+ {
1966
+ "epoch": 0.34355828220858897,
1967
+ "grad_norm": 0.13485747575759888,
1968
+ "learning_rate": 2.1395705521472395e-05,
1969
+ "loss": 0.895561933517456,
1970
+ "step": 280
1971
+ },
1972
+ {
1973
+ "epoch": 0.34478527607361964,
1974
+ "grad_norm": 0.12088459730148315,
1975
+ "learning_rate": 2.1472392638036813e-05,
1976
+ "loss": 1.3100700378417969,
1977
+ "step": 281
1978
+ },
1979
+ {
1980
+ "epoch": 0.3460122699386503,
1981
+ "grad_norm": 0.11534463614225388,
1982
+ "learning_rate": 2.154907975460123e-05,
1983
+ "loss": 1.11163330078125,
1984
+ "step": 282
1985
+ },
1986
+ {
1987
+ "epoch": 0.347239263803681,
1988
+ "grad_norm": 0.12682971358299255,
1989
+ "learning_rate": 2.1625766871165647e-05,
1990
+ "loss": 1.022303581237793,
1991
+ "step": 283
1992
+ },
1993
+ {
1994
+ "epoch": 0.34846625766871164,
1995
+ "grad_norm": 0.1394464671611786,
1996
+ "learning_rate": 2.1702453987730062e-05,
1997
+ "loss": 1.1607928276062012,
1998
+ "step": 284
1999
+ },
2000
+ {
2001
+ "epoch": 0.3496932515337423,
2002
+ "grad_norm": 0.12041673809289932,
2003
+ "learning_rate": 2.1779141104294477e-05,
2004
+ "loss": 0.9233322739601135,
2005
+ "step": 285
2006
+ },
2007
+ {
2008
+ "epoch": 0.350920245398773,
2009
+ "grad_norm": 0.14270088076591492,
2010
+ "learning_rate": 2.1855828220858896e-05,
2011
+ "loss": 1.100630760192871,
2012
+ "step": 286
2013
+ },
2014
+ {
2015
+ "epoch": 0.3521472392638037,
2016
+ "grad_norm": 0.147483691573143,
2017
+ "learning_rate": 2.1932515337423315e-05,
2018
+ "loss": 1.0364069938659668,
2019
+ "step": 287
2020
+ },
2021
+ {
2022
+ "epoch": 0.35337423312883437,
2023
+ "grad_norm": 0.11873972415924072,
2024
+ "learning_rate": 2.2009202453987733e-05,
2025
+ "loss": 1.0084702968597412,
2026
+ "step": 288
2027
+ },
2028
+ {
2029
+ "epoch": 0.35460122699386504,
2030
+ "grad_norm": 0.1410827338695526,
2031
+ "learning_rate": 2.208588957055215e-05,
2032
+ "loss": 0.9259920120239258,
2033
+ "step": 289
2034
+ },
2035
+ {
2036
+ "epoch": 0.3558282208588957,
2037
+ "grad_norm": 0.12482478469610214,
2038
+ "learning_rate": 2.2162576687116564e-05,
2039
+ "loss": 0.8635483384132385,
2040
+ "step": 290
2041
+ },
2042
+ {
2043
+ "epoch": 0.3570552147239264,
2044
+ "grad_norm": 0.11416301131248474,
2045
+ "learning_rate": 2.2239263803680982e-05,
2046
+ "loss": 1.1208245754241943,
2047
+ "step": 291
2048
+ },
2049
+ {
2050
+ "epoch": 0.35828220858895704,
2051
+ "grad_norm": 0.14102044701576233,
2052
+ "learning_rate": 2.2315950920245397e-05,
2053
+ "loss": 1.0029829740524292,
2054
+ "step": 292
2055
+ },
2056
+ {
2057
+ "epoch": 0.3595092024539877,
2058
+ "grad_norm": 0.25360825657844543,
2059
+ "learning_rate": 2.239263803680982e-05,
2060
+ "loss": 1.0258023738861084,
2061
+ "step": 293
2062
+ },
2063
+ {
2064
+ "epoch": 0.36073619631901843,
2065
+ "grad_norm": 0.13373959064483643,
2066
+ "learning_rate": 2.2469325153374235e-05,
2067
+ "loss": 1.0896992683410645,
2068
+ "step": 294
2069
+ },
2070
+ {
2071
+ "epoch": 0.3619631901840491,
2072
+ "grad_norm": 0.13520075380802155,
2073
+ "learning_rate": 2.2546012269938653e-05,
2074
+ "loss": 1.1489715576171875,
2075
+ "step": 295
2076
+ },
2077
+ {
2078
+ "epoch": 0.36319018404907977,
2079
+ "grad_norm": 0.11745908111333847,
2080
+ "learning_rate": 2.262269938650307e-05,
2081
+ "loss": 1.0147686004638672,
2082
+ "step": 296
2083
+ },
2084
+ {
2085
+ "epoch": 0.36441717791411044,
2086
+ "grad_norm": 0.12096819281578064,
2087
+ "learning_rate": 2.2699386503067484e-05,
2088
+ "loss": 1.2157928943634033,
2089
+ "step": 297
2090
+ },
2091
+ {
2092
+ "epoch": 0.3656441717791411,
2093
+ "grad_norm": 0.12824957072734833,
2094
+ "learning_rate": 2.2776073619631902e-05,
2095
+ "loss": 1.037453532218933,
2096
+ "step": 298
2097
+ },
2098
+ {
2099
+ "epoch": 0.36687116564417177,
2100
+ "grad_norm": 0.13326457142829895,
2101
+ "learning_rate": 2.285276073619632e-05,
2102
+ "loss": 0.9710744023323059,
2103
+ "step": 299
2104
+ },
2105
+ {
2106
+ "epoch": 0.36809815950920244,
2107
+ "grad_norm": 0.14364396035671234,
2108
+ "learning_rate": 2.292944785276074e-05,
2109
+ "loss": 0.9510235786437988,
2110
+ "step": 300
2111
+ }
2112
+ ],
2113
+ "logging_steps": 1,
2114
+ "max_steps": 16300,
2115
+ "num_input_tokens_seen": 0,
2116
+ "num_train_epochs": 20,
2117
+ "save_steps": 300,
2118
+ "stateful_callbacks": {
2119
+ "TrainerControl": {
2120
+ "args": {
2121
+ "should_epoch_stop": false,
2122
+ "should_evaluate": false,
2123
+ "should_log": false,
2124
+ "should_save": true,
2125
+ "should_training_stop": false
2126
+ },
2127
+ "attributes": {}
2128
+ }
2129
+ },
2130
+ "total_flos": 8.39372739969024e+17,
2131
+ "train_batch_size": 8,
2132
+ "trial_name": null,
2133
+ "trial_params": null
2134
+ }
last-checkpoint/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d2da33ebc281d72d4c5e6bb5df446b43b8ebfb23b62285abd0b16424914d26f
+ size 5841