Hoseob17 committed
Commit 80b53e8 · verified · 1 Parent(s): c17f862

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,206 @@
+ ---
+ base_model: vidore/colpaligemma-3b-pt-448-base
+ library_name: peft
+ tags:
+ - base_model:adapter:vidore/colpaligemma-3b-pt-448-base
+ - lora
+ - transformers
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
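The get-started section of the card is still a stub. A minimal sketch of how a LoRA adapter like this one could be loaded with `peft` on top of the base model named in the card metadata; the adapter repo id is a placeholder, and the use of the `colpali_engine` helper library is an assumption, not something this commit states:

```python
import torch
from colpali_engine.models import ColPali, ColPaliProcessor  # assumed helper library
from peft import PeftModel

# Base model id comes from the card metadata; the adapter repo id below
# is a placeholder for this repository's id on the Hub.
base = ColPali.from_pretrained(
    "vidore/colpaligemma-3b-pt-448-base", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "<this-adapter-repo-id>")
processor = ColPaliProcessor.from_pretrained("<this-adapter-repo-id>")

queries = processor.process_queries(["What is shown in the figure?"])
with torch.no_grad():
    query_embeddings = model(**queries)  # multi-vector query embeddings
```

This is a sketch under stated assumptions, not a verified recipe; the actual usage code belongs in the card section above.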
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.16.0
adapter_config.json ADDED
@@ -0,0 +1,33 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "vidore/colpaligemma-3b-pt-448-base",
+ "bias": "none",
+ "corda_config": null,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": "gaussian",
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_bias": false,
+ "lora_dropout": 0.1,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "qalora_group_size": 16,
+ "r": 32,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": "(.*(language_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$|.*(custom_text_proj).*$)",
+ "task_type": "FEATURE_EXTRACTION",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
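When `target_modules` is a string rather than a list, PEFT treats it as a regular expression matched against each module's full dotted name. A small self-contained check of which names the regex above selects; the module names used here are illustrative, not taken from this commit:

```python
import re

# target_modules regex from adapter_config.json: LoRA is applied to the
# language model's attention/MLP projections and to custom_text_proj,
# leaving the vision tower untouched.
TARGET_MODULES = (
    r"(.*(language_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj"
    r"|v_proj|o_proj).*$|.*(custom_text_proj).*$)"
)

def is_lora_target(module_name: str) -> bool:
    # PEFT full-matches the pattern against the dotted module path.
    return re.fullmatch(TARGET_MODULES, module_name) is not None

# Illustrative module names in a PaliGemma-style model:
assert is_lora_target("model.language_model.layers.0.self_attn.q_proj")
assert is_lora_target("custom_text_proj")
assert not is_lora_target("model.vision_tower.encoder.layers.0.self_attn.q_proj")
```

The practical effect is that only the text-side projections (plus the retrieval projection head) receive rank-32 LoRA updates.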
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a7c6d7481d65b1fd7ff420f46584be91513a1fa7a34a43e509ec7f660891fe7
+ size 157210936
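The three lines above are a Git LFS pointer file, not the weights themselves: a spec version, a SHA-256 object id, and the size in bytes of the real file. Parsing one takes only the stdlib; a sketch:

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "<key> <value>"; oid is "<algo>:<hex digest>".
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size_bytes": int(fields["size"]),
    }

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:2a7c6d7481d65b1fd7ff420f46584be91513a1fa7a34a43e509ec7f660891fe7\n"
    "size 157210936\n"
)
info = parse_lfs_pointer(pointer)
assert info["oid_algo"] == "sha256"
assert info["size_bytes"] == 157210936  # ~150 MB of adapter weights
```

The same pointer format applies to the other LFS-tracked files in this commit (optimizer.pt, rng states, scheduler.pt, tokenizer.json).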
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8d28abfea36ae0775c9ab2aaf730cedb4d6092c2ecf83dd9c15b946da6abffd
+ size 314557835
preprocessor_config.json ADDED
@@ -0,0 +1,25 @@
+ {
+ "do_convert_rgb": null,
+ "do_normalize": true,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "image_processor_type": "SiglipImageProcessor",
+ "image_seq_length": 1024,
+ "image_std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "processor_class": "ColPaliProcessor",
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "height": 448,
+ "width": 448
+ }
+ }
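The rescale and normalize settings compose into one affine map: `rescale_factor` is exactly 1/255, and with `image_mean` = `image_std` = 0.5 a uint8 pixel value lands in [-1, 1]. A sketch of the per-pixel arithmetic:

```python
def preprocess_pixel(value: int) -> float:
    # Values from preprocessor_config.json: rescale a uint8 value to
    # [0, 1] with 1/255, then normalize with mean = std = 0.5,
    # mapping the result into [-1, 1].
    rescale_factor = 0.00392156862745098  # == 1 / 255
    mean, std = 0.5, 0.5
    return (value * rescale_factor - mean) / std

assert preprocess_pixel(0) == -1.0
assert abs(preprocess_pixel(255) - 1.0) < 1e-12
```

The image processor applies this channel-wise after resizing to 448x448 with bicubic resampling (`resample`: 3).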
rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04222137ef832c18854ecd27bd6a5d0873ff446d3a29d25768aebbe7278ac743
+ size 14853
rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b65e71ba7333c2a63f3aad84eecf79ab463a25481ca66c2fd9271f707462992
+ size 14853
scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5bf0c54ec8cafb88588648413211347a9af6f25d8b5fa8b4450af2694caa8b91
+ size 1465
special_tokens_map.json ADDED
@@ -0,0 +1,39 @@
+ {
+ "additional_special_tokens": [
+ {
+ "content": "<image>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ ],
+ "bos_token": {
+ "content": "<bos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<eos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ff84f53c290d0348c4e206da6094ef781cf8c0e482fec8b268a996b32257cfd
+ size 34600975
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
trainer_state.json ADDED
@@ -0,0 +1,1662 @@
1
+ {
2
+ "best_global_step": null,
3
+ "best_metric": null,
4
+ "best_model_checkpoint": null,
5
+ "epoch": 0.5955603681645912,
6
+ "eval_steps": 200,
7
+ "global_step": 2200,
8
+ "is_hyper_param_search": false,
9
+ "is_local_process_zero": true,
10
+ "is_world_process_zero": true,
11
+ "log_history": [
12
+ {
13
+ "epoch": 0.0027070925825663237,
14
+ "grad_norm": 6.915613174438477,
15
+ "learning_rate": 4.838709677419355e-06,
16
+ "loss": 2.216,
17
+ "step": 10
18
+ },
19
+ {
20
+ "epoch": 0.005414185165132647,
21
+ "grad_norm": 7.003312587738037,
22
+ "learning_rate": 1.0215053763440861e-05,
23
+ "loss": 2.1383,
24
+ "step": 20
25
+ },
26
+ {
27
+ "epoch": 0.008121277747698972,
28
+ "grad_norm": 4.456768989562988,
29
+ "learning_rate": 1.5591397849462366e-05,
30
+ "loss": 2.1074,
31
+ "step": 30
32
+ },
33
+ {
34
+ "epoch": 0.010828370330265295,
35
+ "grad_norm": 6.111919403076172,
36
+ "learning_rate": 2.0967741935483873e-05,
37
+ "loss": 1.8664,
38
+ "step": 40
39
+ },
40
+ {
41
+ "epoch": 0.01353546291283162,
42
+ "grad_norm": 8.282320022583008,
43
+ "learning_rate": 2.6344086021505376e-05,
44
+ "loss": 1.4276,
45
+ "step": 50
46
+ },
47
+ {
48
+ "epoch": 0.016242555495397944,
49
+ "grad_norm": 7.404280662536621,
50
+ "learning_rate": 3.172043010752688e-05,
51
+ "loss": 0.8648,
52
+ "step": 60
53
+ },
54
+ {
55
+ "epoch": 0.018949648077964266,
56
+ "grad_norm": 16.722442626953125,
57
+ "learning_rate": 3.7096774193548386e-05,
58
+ "loss": 0.6635,
59
+ "step": 70
60
+ },
61
+ {
62
+ "epoch": 0.02165674066053059,
63
+ "grad_norm": 9.628230094909668,
64
+ "learning_rate": 4.247311827956989e-05,
65
+ "loss": 0.5382,
66
+ "step": 80
67
+ },
68
+ {
69
+ "epoch": 0.024363833243096916,
70
+ "grad_norm": 5.784783840179443,
71
+ "learning_rate": 4.78494623655914e-05,
72
+ "loss": 0.4909,
73
+ "step": 90
74
+ },
75
+ {
76
+ "epoch": 0.02707092582566324,
77
+ "grad_norm": 9.453276634216309,
78
+ "learning_rate": 4.991668980838656e-05,
79
+ "loss": 0.3097,
80
+ "step": 100
81
+ },
82
+ {
83
+ "epoch": 0.02977801840822956,
84
+ "grad_norm": 6.05251932144165,
85
+ "learning_rate": 4.977783948903082e-05,
86
+ "loss": 0.2938,
87
+ "step": 110
88
+ },
89
+ {
90
+ "epoch": 0.03248511099079589,
91
+ "grad_norm": 2.7414932250976562,
92
+ "learning_rate": 4.963898916967509e-05,
93
+ "loss": 0.433,
94
+ "step": 120
95
+ },
96
+ {
97
+ "epoch": 0.03519220357336221,
98
+ "grad_norm": 9.005271911621094,
99
+ "learning_rate": 4.950013885031936e-05,
100
+ "loss": 0.4313,
101
+ "step": 130
102
+ },
103
+ {
104
+ "epoch": 0.03789929615592853,
105
+ "grad_norm": 7.538389682769775,
106
+ "learning_rate": 4.9361288530963625e-05,
107
+ "loss": 0.3519,
108
+ "step": 140
109
+ },
110
+ {
111
+ "epoch": 0.040606388738494856,
112
+ "grad_norm": 6.26092529296875,
113
+ "learning_rate": 4.9222438211607894e-05,
114
+ "loss": 0.3581,
115
+ "step": 150
116
+ },
117
+ {
118
+ "epoch": 0.04331348132106118,
119
+ "grad_norm": 7.152598857879639,
120
+ "learning_rate": 4.908358789225215e-05,
121
+ "loss": 0.3238,
122
+ "step": 160
123
+ },
124
+ {
125
+ "epoch": 0.0460205739036275,
126
+ "grad_norm": 6.930057525634766,
127
+ "learning_rate": 4.894473757289642e-05,
128
+ "loss": 0.3203,
129
+ "step": 170
130
+ },
131
+ {
132
+ "epoch": 0.04872766648619383,
133
+ "grad_norm": 6.083557605743408,
134
+ "learning_rate": 4.880588725354068e-05,
135
+ "loss": 0.2912,
136
+ "step": 180
137
+ },
138
+ {
139
+ "epoch": 0.051434759068760154,
140
+ "grad_norm": 5.25084114074707,
141
+ "learning_rate": 4.866703693418495e-05,
142
+ "loss": 0.3177,
143
+ "step": 190
144
+ },
145
+ {
146
+ "epoch": 0.05414185165132648,
147
+ "grad_norm": 5.197162628173828,
148
+ "learning_rate": 4.852818661482922e-05,
149
+ "loss": 0.217,
150
+ "step": 200
151
+ },
152
+ {
153
+ "epoch": 0.05414185165132648,
154
+ "eval_loss": 0.21044665575027466,
155
+ "eval_runtime": 38.4472,
156
+ "eval_samples_per_second": 13.005,
157
+ "eval_steps_per_second": 0.832,
158
+ "step": 200
159
+ },
160
+ {
161
+ "epoch": 0.0568489442338928,
162
+ "grad_norm": 6.816867828369141,
163
+ "learning_rate": 4.8389336295473484e-05,
164
+ "loss": 0.3546,
165
+ "step": 210
166
+ },
167
+ {
168
+ "epoch": 0.05955603681645912,
169
+ "grad_norm": 5.185015678405762,
170
+ "learning_rate": 4.825048597611775e-05,
171
+ "loss": 0.2319,
172
+ "step": 220
173
+ },
174
+ {
175
+ "epoch": 0.062263129399025445,
176
+ "grad_norm": 3.9528026580810547,
177
+ "learning_rate": 4.811163565676201e-05,
178
+ "loss": 0.2197,
179
+ "step": 230
180
+ },
181
+ {
182
+ "epoch": 0.06497022198159177,
183
+ "grad_norm": 6.212977409362793,
184
+ "learning_rate": 4.797278533740628e-05,
185
+ "loss": 0.226,
186
+ "step": 240
187
+ },
188
+ {
189
+ "epoch": 0.0676773145641581,
190
+ "grad_norm": 2.5513417720794678,
191
+ "learning_rate": 4.783393501805055e-05,
192
+ "loss": 0.1603,
193
+ "step": 250
194
+ },
195
+ {
196
+ "epoch": 0.07038440714672442,
197
+ "grad_norm": 4.768312931060791,
198
+ "learning_rate": 4.769508469869481e-05,
199
+ "loss": 0.2303,
200
+ "step": 260
201
+ },
202
+ {
203
+ "epoch": 0.07309149972929074,
204
+ "grad_norm": 4.091211795806885,
205
+ "learning_rate": 4.7556234379339074e-05,
206
+ "loss": 0.2353,
207
+ "step": 270
208
+ },
209
+ {
210
+ "epoch": 0.07579859231185707,
211
+ "grad_norm": 3.9319090843200684,
212
+ "learning_rate": 4.741738405998334e-05,
213
+ "loss": 0.1893,
214
+ "step": 280
215
+ },
216
+ {
217
+ "epoch": 0.07850568489442339,
218
+ "grad_norm": 2.9893264770507812,
219
+ "learning_rate": 4.7278533740627606e-05,
220
+ "loss": 0.0983,
221
+ "step": 290
222
+ },
223
+ {
224
+ "epoch": 0.08121277747698971,
225
+ "grad_norm": 4.845495700836182,
226
+ "learning_rate": 4.713968342127187e-05,
227
+ "loss": 0.2001,
228
+ "step": 300
229
+ },
230
+ {
231
+ "epoch": 0.08391987005955603,
232
+ "grad_norm": 6.0109968185424805,
233
+ "learning_rate": 4.700083310191614e-05,
234
+ "loss": 0.2549,
235
+ "step": 310
236
+ },
237
+ {
238
+ "epoch": 0.08662696264212236,
239
+ "grad_norm": 3.424335241317749,
240
+ "learning_rate": 4.68619827825604e-05,
241
+ "loss": 0.1977,
242
+ "step": 320
243
+ },
244
+ {
245
+ "epoch": 0.08933405522468868,
246
+ "grad_norm": 2.177932024002075,
247
+ "learning_rate": 4.6723132463204664e-05,
248
+ "loss": 0.1989,
249
+ "step": 330
250
+ },
251
+ {
252
+ "epoch": 0.092041147807255,
253
+ "grad_norm": 4.684967517852783,
254
+ "learning_rate": 4.6584282143848933e-05,
255
+ "loss": 0.1908,
256
+ "step": 340
257
+ },
258
+ {
259
+ "epoch": 0.09474824038982133,
260
+ "grad_norm": 5.848077774047852,
261
+ "learning_rate": 4.6445431824493196e-05,
262
+ "loss": 0.2058,
263
+ "step": 350
264
+ },
265
+ {
266
+ "epoch": 0.09745533297238766,
267
+ "grad_norm": 5.5079264640808105,
268
+ "learning_rate": 4.6306581505137466e-05,
269
+ "loss": 0.2388,
270
+ "step": 360
271
+ },
272
+ {
273
+ "epoch": 0.10016242555495398,
274
+ "grad_norm": 0.791803777217865,
275
+ "learning_rate": 4.616773118578173e-05,
276
+ "loss": 0.2027,
277
+ "step": 370
278
+ },
279
+ {
280
+ "epoch": 0.10286951813752031,
281
+ "grad_norm": 3.581902503967285,
282
+ "learning_rate": 4.602888086642599e-05,
283
+ "loss": 0.144,
284
+ "step": 380
285
+ },
286
+ {
287
+ "epoch": 0.10557661072008663,
288
+ "grad_norm": 1.2181485891342163,
289
+ "learning_rate": 4.589003054707026e-05,
290
+ "loss": 0.1063,
291
+ "step": 390
292
+ },
293
+ {
294
+ "epoch": 0.10828370330265295,
295
+ "grad_norm": 4.920592308044434,
296
+ "learning_rate": 4.5751180227714523e-05,
297
+ "loss": 0.2288,
298
+ "step": 400
299
+ },
300
+ {
301
+ "epoch": 0.10828370330265295,
302
+ "eval_loss": 0.13153080642223358,
303
+ "eval_runtime": 39.8777,
304
+ "eval_samples_per_second": 12.538,
305
+ "eval_steps_per_second": 0.802,
306
+ "step": 400
307
+ },
308
+ {
309
+ "epoch": 0.11099079588521928,
310
+ "grad_norm": 1.773049235343933,
311
+ "learning_rate": 4.561232990835879e-05,
312
+ "loss": 0.1495,
313
+ "step": 410
314
+ },
315
+ {
316
+ "epoch": 0.1136978884677856,
317
+ "grad_norm": 2.75878643989563,
318
+ "learning_rate": 4.547347958900306e-05,
319
+ "loss": 0.1702,
320
+ "step": 420
321
+ },
322
+ {
323
+ "epoch": 0.11640498105035192,
324
+ "grad_norm": 1.618165135383606,
325
+ "learning_rate": 4.533462926964732e-05,
326
+ "loss": 0.1347,
327
+ "step": 430
328
+ },
329
+ {
330
+ "epoch": 0.11911207363291824,
331
+ "grad_norm": 2.071643114089966,
332
+ "learning_rate": 4.519577895029159e-05,
333
+ "loss": 0.1637,
334
+ "step": 440
335
+ },
336
+ {
337
+ "epoch": 0.12181916621548457,
338
+ "grad_norm": 2.7518310546875,
339
+ "learning_rate": 4.505692863093585e-05,
340
+ "loss": 0.1245,
341
+ "step": 450
342
+ },
343
+ {
344
+ "epoch": 0.12452625879805089,
345
+ "grad_norm": 6.924583435058594,
346
+ "learning_rate": 4.491807831158012e-05,
347
+ "loss": 0.0917,
348
+ "step": 460
349
+ },
350
+ {
351
+ "epoch": 0.12723335138061723,
352
+ "grad_norm": 1.4702112674713135,
353
+ "learning_rate": 4.477922799222438e-05,
354
+ "loss": 0.153,
355
+ "step": 470
356
+ },
357
+ {
358
+ "epoch": 0.12994044396318355,
359
+ "grad_norm": 1.4525381326675415,
360
+ "learning_rate": 4.464037767286865e-05,
361
+ "loss": 0.1356,
362
+ "step": 480
363
+ },
364
+ {
365
+ "epoch": 0.13264753654574987,
366
+ "grad_norm": 1.5836750268936157,
367
+ "learning_rate": 4.4501527353512915e-05,
368
+ "loss": 0.1632,
369
+ "step": 490
370
+ },
371
+ {
372
+ "epoch": 0.1353546291283162,
373
+ "grad_norm": 3.392880439758301,
374
+ "learning_rate": 4.436267703415718e-05,
375
+ "loss": 0.2004,
376
+ "step": 500
377
+ },
378
+ {
379
+ "epoch": 0.13806172171088252,
380
+ "grad_norm": 1.7115120887756348,
381
+ "learning_rate": 4.422382671480145e-05,
382
+ "loss": 0.1089,
383
+ "step": 510
384
+ },
385
+ {
386
+ "epoch": 0.14076881429344884,
387
+ "grad_norm": 5.1935811042785645,
388
+ "learning_rate": 4.408497639544571e-05,
389
+ "loss": 0.1627,
390
+ "step": 520
391
+ },
392
+ {
393
+ "epoch": 0.14347590687601516,
394
+ "grad_norm": 1.8821942806243896,
395
+ "learning_rate": 4.394612607608998e-05,
396
+ "loss": 0.1658,
397
+ "step": 530
398
+ },
399
+ {
400
+ "epoch": 0.1461829994585815,
401
+ "grad_norm": 7.369962692260742,
402
+ "learning_rate": 4.380727575673424e-05,
403
+ "loss": 0.129,
404
+ "step": 540
405
+ },
406
+ {
407
+ "epoch": 0.1488900920411478,
408
+ "grad_norm": 6.980068683624268,
409
+ "learning_rate": 4.3668425437378505e-05,
410
+ "loss": 0.1077,
411
+ "step": 550
412
+ },
413
+ {
414
+ "epoch": 0.15159718462371413,
415
+ "grad_norm": 3.1524200439453125,
416
+ "learning_rate": 4.3529575118022775e-05,
417
+ "loss": 0.1809,
418
+ "step": 560
419
+ },
420
+ {
421
+ "epoch": 0.15430427720628045,
422
+ "grad_norm": 1.2788898944854736,
423
+ "learning_rate": 4.339072479866704e-05,
424
+ "loss": 0.1159,
425
+ "step": 570
426
+ },
427
+ {
428
+ "epoch": 0.15701136978884678,
429
+ "grad_norm": 3.4042916297912598,
430
+ "learning_rate": 4.325187447931131e-05,
431
+ "loss": 0.1648,
432
+ "step": 580
433
+ },
434
+ {
435
+ "epoch": 0.1597184623714131,
436
+ "grad_norm": 1.9807442426681519,
437
+ "learning_rate": 4.311302415995557e-05,
438
+ "loss": 0.1419,
439
+ "step": 590
440
+ },
441
+ {
442
+ "epoch": 0.16242555495397942,
443
+ "grad_norm": 2.910810708999634,
444
+ "learning_rate": 4.297417384059983e-05,
445
+ "loss": 0.1224,
446
+ "step": 600
447
+ },
448
+ {
449
+ "epoch": 0.16242555495397942,
450
+ "eval_loss": 0.12276403605937958,
451
+ "eval_runtime": 41.3732,
452
+ "eval_samples_per_second": 12.085,
453
+ "eval_steps_per_second": 0.773,
454
+ "step": 600
455
+ },
456
+ {
457
+ "epoch": 0.16513264753654575,
458
+ "grad_norm": 3.0510406494140625,
459
+ "learning_rate": 4.28353235212441e-05,
460
+ "loss": 0.1388,
461
+ "step": 610
462
+ },
463
+ {
464
+ "epoch": 0.16783974011911207,
465
+ "grad_norm": 3.4809114933013916,
466
+ "learning_rate": 4.2696473201888365e-05,
467
+ "loss": 0.1767,
468
+ "step": 620
469
+ },
470
+ {
471
+ "epoch": 0.1705468327016784,
472
+ "grad_norm": 0.8834869265556335,
473
+ "learning_rate": 4.2557622882532634e-05,
474
+ "loss": 0.1368,
475
+ "step": 630
476
+ },
477
+ {
478
+ "epoch": 0.17325392528424471,
479
+ "grad_norm": 4.756887435913086,
480
+ "learning_rate": 4.24187725631769e-05,
481
+ "loss": 0.1456,
482
+ "step": 640
483
+ },
484
+ {
485
+ "epoch": 0.17596101786681104,
486
+ "grad_norm": 4.665611743927002,
487
+ "learning_rate": 4.227992224382116e-05,
488
+ "loss": 0.1507,
489
+ "step": 650
490
+ },
491
+ {
492
+ "epoch": 0.17866811044937736,
493
+ "grad_norm": 4.004549503326416,
494
+ "learning_rate": 4.214107192446543e-05,
495
+ "loss": 0.1079,
496
+ "step": 660
497
+ },
498
+ {
499
+ "epoch": 0.18137520303194368,
500
+ "grad_norm": 5.584454536437988,
501
+ "learning_rate": 4.200222160510969e-05,
502
+ "loss": 0.1116,
503
+ "step": 670
504
+ },
505
+ {
506
+ "epoch": 0.18408229561451,
507
+ "grad_norm": 6.875036716461182,
508
+ "learning_rate": 4.186337128575396e-05,
509
+ "loss": 0.1949,
510
+ "step": 680
511
+ },
512
+ {
513
+ "epoch": 0.18678938819707633,
514
+ "grad_norm": 3.8243751525878906,
515
+ "learning_rate": 4.1724520966398224e-05,
516
+ "loss": 0.1759,
517
+ "step": 690
518
+ },
519
+ {
520
+ "epoch": 0.18949648077964265,
521
+ "grad_norm": 0.9678667783737183,
522
+ "learning_rate": 4.1585670647042494e-05,
523
+ "loss": 0.1308,
524
+ "step": 700
525
+ },
526
+ {
527
+ "epoch": 0.19220357336220897,
528
+ "grad_norm": 3.1482715606689453,
529
+ "learning_rate": 4.1446820327686756e-05,
530
+ "loss": 0.1755,
531
+ "step": 710
532
+ },
533
+ {
534
+ "epoch": 0.19491066594477532,
535
+ "grad_norm": 1.77426016330719,
536
+ "learning_rate": 4.130797000833102e-05,
537
+ "loss": 0.1097,
538
+ "step": 720
539
+ },
540
+ {
541
+ "epoch": 0.19761775852734165,
542
+ "grad_norm": 3.6544015407562256,
543
+ "learning_rate": 4.116911968897529e-05,
544
+ "loss": 0.1258,
545
+ "step": 730
546
+ },
547
+ {
548
+ "epoch": 0.20032485110990797,
549
+ "grad_norm": 0.3499806225299835,
550
+ "learning_rate": 4.103026936961955e-05,
551
+ "loss": 0.1203,
552
+ "step": 740
553
+ },
554
+ {
555
+ "epoch": 0.2030319436924743,
556
+ "grad_norm": 1.8539406061172485,
557
+ "learning_rate": 4.089141905026382e-05,
558
+ "loss": 0.1,
559
+ "step": 750
560
+ },
561
+ {
562
+ "epoch": 0.20573903627504062,
563
+ "grad_norm": 3.3867313861846924,
564
+ "learning_rate": 4.0752568730908084e-05,
565
+ "loss": 0.1537,
566
+ "step": 760
567
+ },
568
+ {
569
+ "epoch": 0.20844612885760694,
570
+ "grad_norm": 1.560181975364685,
571
+ "learning_rate": 4.0613718411552346e-05,
572
+ "loss": 0.1085,
573
+ "step": 770
574
+ },
575
+ {
576
+ "epoch": 0.21115322144017326,
577
+ "grad_norm": 4.343753337860107,
578
+ "learning_rate": 4.0474868092196616e-05,
579
+ "loss": 0.0756,
580
+ "step": 780
581
+ },
582
+ {
583
+ "epoch": 0.21386031402273958,
584
+ "grad_norm": 3.629037857055664,
585
+ "learning_rate": 4.033601777284088e-05,
586
+ "loss": 0.1207,
587
+ "step": 790
588
+ },
589
+ {
590
+ "epoch": 0.2165674066053059,
591
+ "grad_norm": 2.0169517993927,
592
+ "learning_rate": 4.019716745348515e-05,
593
+ "loss": 0.0913,
594
+ "step": 800
595
+ },
596
+ {
597
+ "epoch": 0.2165674066053059,
598
+ "eval_loss": 0.11925679445266724,
599
+ "eval_runtime": 43.8415,
600
+ "eval_samples_per_second": 11.405,
601
+ "eval_steps_per_second": 0.73,
602
+ "step": 800
603
+ },
604
+ {
605
+ "epoch": 0.21927449918787223,
606
+ "grad_norm": 1.212438941001892,
607
+ "learning_rate": 4.005831713412941e-05,
608
+ "loss": 0.0968,
609
+ "step": 810
610
+ },
611
+ {
612
+ "epoch": 0.22198159177043855,
613
+ "grad_norm": 2.369439125061035,
614
+ "learning_rate": 3.9919466814773674e-05,
+ "loss": 0.1065,
+ "step": 820
+ },
+ {
+ "epoch": 0.22468868435300487,
+ "grad_norm": 2.005507469177246,
+ "learning_rate": 3.978061649541794e-05,
+ "loss": 0.1262,
+ "step": 830
+ },
+ {
+ "epoch": 0.2273957769355712,
+ "grad_norm": 7.557380676269531,
+ "learning_rate": 3.9641766176062206e-05,
+ "loss": 0.0766,
+ "step": 840
+ },
+ {
+ "epoch": 0.23010286951813752,
+ "grad_norm": 2.7166643142700195,
+ "learning_rate": 3.9502915856706475e-05,
+ "loss": 0.1746,
+ "step": 850
+ },
+ {
+ "epoch": 0.23280996210070384,
+ "grad_norm": 3.17519211769104,
+ "learning_rate": 3.936406553735074e-05,
+ "loss": 0.1201,
+ "step": 860
+ },
+ {
+ "epoch": 0.23551705468327017,
+ "grad_norm": 2.7503132820129395,
+ "learning_rate": 3.9225215217995e-05,
+ "loss": 0.1404,
+ "step": 870
+ },
+ {
+ "epoch": 0.2382241472658365,
+ "grad_norm": 2.0287389755249023,
+ "learning_rate": 3.9086364898639264e-05,
+ "loss": 0.0795,
+ "step": 880
+ },
+ {
+ "epoch": 0.2409312398484028,
+ "grad_norm": 2.2139947414398193,
+ "learning_rate": 3.894751457928353e-05,
+ "loss": 0.1682,
+ "step": 890
+ },
+ {
+ "epoch": 0.24363833243096913,
+ "grad_norm": 0.06431946903467178,
+ "learning_rate": 3.88086642599278e-05,
+ "loss": 0.0632,
+ "step": 900
+ },
+ {
+ "epoch": 0.24634542501353546,
+ "grad_norm": 3.912949323654175,
+ "learning_rate": 3.8669813940572065e-05,
+ "loss": 0.1254,
+ "step": 910
+ },
+ {
+ "epoch": 0.24905251759610178,
+ "grad_norm": 6.618512153625488,
+ "learning_rate": 3.8530963621216335e-05,
+ "loss": 0.1486,
+ "step": 920
+ },
+ {
+ "epoch": 0.2517596101786681,
+ "grad_norm": 0.78745436668396,
+ "learning_rate": 3.839211330186059e-05,
+ "loss": 0.127,
+ "step": 930
+ },
+ {
+ "epoch": 0.25446670276123445,
+ "grad_norm": 3.2422189712524414,
+ "learning_rate": 3.825326298250486e-05,
+ "loss": 0.1098,
+ "step": 940
+ },
+ {
+ "epoch": 0.25717379534380075,
+ "grad_norm": 2.6565208435058594,
+ "learning_rate": 3.811441266314913e-05,
+ "loss": 0.0979,
+ "step": 950
+ },
+ {
+ "epoch": 0.2598808879263671,
+ "grad_norm": 0.9006327390670776,
+ "learning_rate": 3.797556234379339e-05,
+ "loss": 0.0908,
+ "step": 960
+ },
+ {
+ "epoch": 0.2625879805089334,
+ "grad_norm": 2.5286571979522705,
+ "learning_rate": 3.783671202443766e-05,
+ "loss": 0.1006,
+ "step": 970
+ },
+ {
+ "epoch": 0.26529507309149974,
+ "grad_norm": 3.4348323345184326,
+ "learning_rate": 3.7697861705081925e-05,
+ "loss": 0.1031,
+ "step": 980
+ },
+ {
+ "epoch": 0.26800216567406604,
+ "grad_norm": 4.574017524719238,
+ "learning_rate": 3.755901138572619e-05,
+ "loss": 0.1484,
+ "step": 990
+ },
+ {
+ "epoch": 0.2707092582566324,
+ "grad_norm": 1.7903037071228027,
+ "learning_rate": 3.742016106637046e-05,
+ "loss": 0.0964,
+ "step": 1000
+ },
+ {
+ "epoch": 0.2707092582566324,
+ "eval_loss": 0.12172573804855347,
+ "eval_runtime": 42.6382,
+ "eval_samples_per_second": 11.727,
+ "eval_steps_per_second": 0.75,
+ "step": 1000
+ },
752
+ {
+ "epoch": 0.2734163508391987,
+ "grad_norm": 4.126545429229736,
+ "learning_rate": 3.728131074701472e-05,
+ "loss": 0.1381,
+ "step": 1010
+ },
+ {
+ "epoch": 0.27612344342176504,
+ "grad_norm": 2.3475353717803955,
+ "learning_rate": 3.714246042765899e-05,
+ "loss": 0.1235,
+ "step": 1020
+ },
+ {
+ "epoch": 0.27883053600433133,
+ "grad_norm": 1.137242078781128,
+ "learning_rate": 3.700361010830325e-05,
+ "loss": 0.1033,
+ "step": 1030
+ },
+ {
+ "epoch": 0.2815376285868977,
+ "grad_norm": 2.257161855697632,
+ "learning_rate": 3.6864759788947515e-05,
+ "loss": 0.1302,
+ "step": 1040
+ },
+ {
+ "epoch": 0.284244721169464,
+ "grad_norm": 8.024300575256348,
+ "learning_rate": 3.672590946959178e-05,
+ "loss": 0.1123,
+ "step": 1050
+ },
+ {
+ "epoch": 0.2869518137520303,
+ "grad_norm": 2.8162035942077637,
+ "learning_rate": 3.658705915023605e-05,
+ "loss": 0.1335,
+ "step": 1060
+ },
+ {
+ "epoch": 0.2896589063345966,
+ "grad_norm": 3.007559299468994,
+ "learning_rate": 3.6448208830880317e-05,
+ "loss": 0.1021,
+ "step": 1070
+ },
+ {
+ "epoch": 0.292365998917163,
+ "grad_norm": 3.705847978591919,
+ "learning_rate": 3.630935851152458e-05,
+ "loss": 0.1044,
+ "step": 1080
+ },
+ {
+ "epoch": 0.29507309149972927,
+ "grad_norm": 3.605656623840332,
+ "learning_rate": 3.617050819216884e-05,
+ "loss": 0.1219,
+ "step": 1090
+ },
+ {
+ "epoch": 0.2977801840822956,
+ "grad_norm": 2.14206600189209,
+ "learning_rate": 3.6031657872813105e-05,
+ "loss": 0.1599,
+ "step": 1100
+ },
+ {
+ "epoch": 0.3004872766648619,
+ "grad_norm": 2.947873830795288,
+ "learning_rate": 3.5892807553457374e-05,
+ "loss": 0.1157,
+ "step": 1110
+ },
+ {
+ "epoch": 0.30319436924742826,
+ "grad_norm": 3.078336715698242,
+ "learning_rate": 3.5753957234101644e-05,
+ "loss": 0.1188,
+ "step": 1120
+ },
+ {
+ "epoch": 0.30590146182999456,
+ "grad_norm": 0.26743820309638977,
+ "learning_rate": 3.5615106914745907e-05,
+ "loss": 0.1042,
+ "step": 1130
+ },
+ {
+ "epoch": 0.3086085544125609,
+ "grad_norm": 3.805241823196411,
+ "learning_rate": 3.547625659539017e-05,
+ "loss": 0.1159,
+ "step": 1140
+ },
+ {
+ "epoch": 0.31131564699512726,
+ "grad_norm": 5.69476318359375,
+ "learning_rate": 3.533740627603443e-05,
+ "loss": 0.1115,
+ "step": 1150
+ },
+ {
+ "epoch": 0.31402273957769355,
+ "grad_norm": 1.543199896812439,
+ "learning_rate": 3.51985559566787e-05,
+ "loss": 0.1228,
+ "step": 1160
+ },
+ {
+ "epoch": 0.3167298321602599,
+ "grad_norm": 6.506993293762207,
+ "learning_rate": 3.5059705637322964e-05,
+ "loss": 0.1327,
+ "step": 1170
+ },
+ {
+ "epoch": 0.3194369247428262,
+ "grad_norm": 4.64252233505249,
+ "learning_rate": 3.4920855317967234e-05,
+ "loss": 0.1452,
+ "step": 1180
+ },
+ {
+ "epoch": 0.32214401732539255,
+ "grad_norm": 1.6929792165756226,
+ "learning_rate": 3.47820049986115e-05,
+ "loss": 0.0785,
+ "step": 1190
+ },
+ {
+ "epoch": 0.32485110990795885,
+ "grad_norm": 4.854442119598389,
+ "learning_rate": 3.464315467925576e-05,
+ "loss": 0.1319,
+ "step": 1200
+ },
+ {
+ "epoch": 0.32485110990795885,
+ "eval_loss": 0.09344498813152313,
+ "eval_runtime": 42.1424,
+ "eval_samples_per_second": 11.865,
+ "eval_steps_per_second": 0.759,
+ "step": 1200
+ },
900
+ {
+ "epoch": 0.3275582024905252,
+ "grad_norm": 5.581364631652832,
+ "learning_rate": 3.450430435990003e-05,
+ "loss": 0.1049,
+ "step": 1210
+ },
+ {
+ "epoch": 0.3302652950730915,
+ "grad_norm": 2.954586982727051,
+ "learning_rate": 3.436545404054429e-05,
+ "loss": 0.0919,
+ "step": 1220
+ },
+ {
+ "epoch": 0.33297238765565784,
+ "grad_norm": 5.633950233459473,
+ "learning_rate": 3.422660372118856e-05,
+ "loss": 0.1622,
+ "step": 1230
+ },
+ {
+ "epoch": 0.33567948023822414,
+ "grad_norm": 1.8235430717468262,
+ "learning_rate": 3.408775340183283e-05,
+ "loss": 0.1313,
+ "step": 1240
+ },
+ {
+ "epoch": 0.3383865728207905,
+ "grad_norm": 0.5474218726158142,
+ "learning_rate": 3.394890308247709e-05,
+ "loss": 0.0547,
+ "step": 1250
+ },
+ {
+ "epoch": 0.3410936654033568,
+ "grad_norm": 1.9853441715240479,
+ "learning_rate": 3.3810052763121356e-05,
+ "loss": 0.1126,
+ "step": 1260
+ },
+ {
+ "epoch": 0.34380075798592313,
+ "grad_norm": 4.861907482147217,
+ "learning_rate": 3.367120244376562e-05,
+ "loss": 0.1532,
+ "step": 1270
+ },
+ {
+ "epoch": 0.34650785056848943,
+ "grad_norm": 4.035737991333008,
+ "learning_rate": 3.353235212440989e-05,
+ "loss": 0.1483,
+ "step": 1280
+ },
+ {
+ "epoch": 0.3492149431510558,
+ "grad_norm": 2.4256069660186768,
+ "learning_rate": 3.339350180505416e-05,
+ "loss": 0.116,
+ "step": 1290
+ },
+ {
+ "epoch": 0.3519220357336221,
+ "grad_norm": 4.798511505126953,
+ "learning_rate": 3.325465148569842e-05,
+ "loss": 0.1269,
+ "step": 1300
+ },
+ {
+ "epoch": 0.3546291283161884,
+ "grad_norm": 1.485160231590271,
+ "learning_rate": 3.311580116634268e-05,
+ "loss": 0.0906,
+ "step": 1310
+ },
+ {
+ "epoch": 0.3573362208987547,
+ "grad_norm": 2.424042224884033,
+ "learning_rate": 3.2976950846986946e-05,
+ "loss": 0.1337,
+ "step": 1320
+ },
+ {
+ "epoch": 0.36004331348132107,
+ "grad_norm": 3.985996723175049,
+ "learning_rate": 3.2838100527631216e-05,
+ "loss": 0.2552,
+ "step": 1330
+ },
+ {
+ "epoch": 0.36275040606388737,
+ "grad_norm": 1.6792798042297363,
+ "learning_rate": 3.269925020827548e-05,
+ "loss": 0.1659,
+ "step": 1340
+ },
+ {
+ "epoch": 0.3654574986464537,
+ "grad_norm": 1.6179767847061157,
+ "learning_rate": 3.256039988891975e-05,
+ "loss": 0.102,
+ "step": 1350
+ },
+ {
+ "epoch": 0.36816459122902,
+ "grad_norm": 1.6773470640182495,
+ "learning_rate": 3.242154956956401e-05,
+ "loss": 0.0605,
+ "step": 1360
+ },
+ {
+ "epoch": 0.37087168381158636,
+ "grad_norm": 0.5299155116081238,
+ "learning_rate": 3.228269925020827e-05,
+ "loss": 0.1454,
+ "step": 1370
+ },
+ {
+ "epoch": 0.37357877639415266,
+ "grad_norm": 2.9945433139801025,
+ "learning_rate": 3.214384893085254e-05,
+ "loss": 0.1168,
+ "step": 1380
+ },
+ {
+ "epoch": 0.376285868976719,
+ "grad_norm": 2.2801761627197266,
+ "learning_rate": 3.2004998611496805e-05,
+ "loss": 0.1413,
+ "step": 1390
+ },
+ {
+ "epoch": 0.3789929615592853,
+ "grad_norm": 2.4864439964294434,
+ "learning_rate": 3.1866148292141075e-05,
+ "loss": 0.1124,
+ "step": 1400
+ },
+ {
+ "epoch": 0.3789929615592853,
+ "eval_loss": 0.08532879501581192,
+ "eval_runtime": 42.5981,
+ "eval_samples_per_second": 11.738,
+ "eval_steps_per_second": 0.751,
+ "step": 1400
+ },
1048
+ {
+ "epoch": 0.38170005414185165,
+ "grad_norm": 3.092496156692505,
+ "learning_rate": 3.1727297972785345e-05,
+ "loss": 0.1108,
+ "step": 1410
+ },
+ {
+ "epoch": 0.38440714672441795,
+ "grad_norm": 2.173140525817871,
+ "learning_rate": 3.15884476534296e-05,
+ "loss": 0.1207,
+ "step": 1420
+ },
+ {
+ "epoch": 0.3871142393069843,
+ "grad_norm": 0.5932335257530212,
+ "learning_rate": 3.144959733407387e-05,
+ "loss": 0.076,
+ "step": 1430
+ },
+ {
+ "epoch": 0.38982133188955065,
+ "grad_norm": 3.7685434818267822,
+ "learning_rate": 3.131074701471813e-05,
+ "loss": 0.1398,
+ "step": 1440
+ },
+ {
+ "epoch": 0.39252842447211694,
+ "grad_norm": 0.32636183500289917,
+ "learning_rate": 3.11718966953624e-05,
+ "loss": 0.1572,
+ "step": 1450
+ },
+ {
+ "epoch": 0.3952355170546833,
+ "grad_norm": 0.13235528767108917,
+ "learning_rate": 3.1033046376006665e-05,
+ "loss": 0.1008,
+ "step": 1460
+ },
+ {
+ "epoch": 0.3979426096372496,
+ "grad_norm": 2.1381213665008545,
+ "learning_rate": 3.0894196056650935e-05,
+ "loss": 0.0962,
+ "step": 1470
+ },
+ {
+ "epoch": 0.40064970221981594,
+ "grad_norm": 4.74404239654541,
+ "learning_rate": 3.07553457372952e-05,
+ "loss": 0.1425,
+ "step": 1480
+ },
+ {
+ "epoch": 0.40335679480238223,
+ "grad_norm": 2.1296935081481934,
+ "learning_rate": 3.061649541793946e-05,
+ "loss": 0.1344,
+ "step": 1490
+ },
+ {
+ "epoch": 0.4060638873849486,
+ "grad_norm": 0.6475437879562378,
+ "learning_rate": 3.047764509858373e-05,
+ "loss": 0.0721,
+ "step": 1500
+ },
+ {
+ "epoch": 0.4087709799675149,
+ "grad_norm": 1.9603205919265747,
+ "learning_rate": 3.0338794779227992e-05,
+ "loss": 0.0785,
+ "step": 1510
+ },
+ {
+ "epoch": 0.41147807255008123,
+ "grad_norm": 1.7050154209136963,
+ "learning_rate": 3.019994445987226e-05,
+ "loss": 0.0957,
+ "step": 1520
+ },
+ {
+ "epoch": 0.4141851651326475,
+ "grad_norm": 1.2879791259765625,
+ "learning_rate": 3.0061094140516528e-05,
+ "loss": 0.0934,
+ "step": 1530
+ },
+ {
+ "epoch": 0.4168922577152139,
+ "grad_norm": 3.113666296005249,
+ "learning_rate": 2.992224382116079e-05,
+ "loss": 0.0995,
+ "step": 1540
+ },
+ {
+ "epoch": 0.41959935029778017,
+ "grad_norm": 3.428565263748169,
+ "learning_rate": 2.9783393501805057e-05,
+ "loss": 0.1517,
+ "step": 1550
+ },
+ {
+ "epoch": 0.4223064428803465,
+ "grad_norm": 3.6910240650177,
+ "learning_rate": 2.964454318244932e-05,
+ "loss": 0.0736,
+ "step": 1560
+ },
+ {
+ "epoch": 0.4250135354629128,
+ "grad_norm": 0.8222328424453735,
+ "learning_rate": 2.9505692863093586e-05,
+ "loss": 0.1011,
+ "step": 1570
+ },
+ {
+ "epoch": 0.42772062804547917,
+ "grad_norm": 4.943286418914795,
+ "learning_rate": 2.9366842543737855e-05,
+ "loss": 0.1063,
+ "step": 1580
+ },
+ {
+ "epoch": 0.43042772062804546,
+ "grad_norm": 2.139016628265381,
+ "learning_rate": 2.9227992224382118e-05,
+ "loss": 0.106,
+ "step": 1590
+ },
+ {
+ "epoch": 0.4331348132106118,
+ "grad_norm": 0.6075900197029114,
+ "learning_rate": 2.9089141905026384e-05,
+ "loss": 0.1133,
+ "step": 1600
+ },
+ {
+ "epoch": 0.4331348132106118,
+ "eval_loss": 0.0865432620048523,
+ "eval_runtime": 40.5041,
+ "eval_samples_per_second": 12.344,
+ "eval_steps_per_second": 0.79,
+ "step": 1600
+ },
1196
+ {
+ "epoch": 0.4358419057931781,
+ "grad_norm": 3.410069704055786,
+ "learning_rate": 2.8950291585670647e-05,
+ "loss": 0.1261,
+ "step": 1610
+ },
+ {
+ "epoch": 0.43854899837574446,
+ "grad_norm": 4.080138206481934,
+ "learning_rate": 2.8811441266314916e-05,
+ "loss": 0.1168,
+ "step": 1620
+ },
+ {
+ "epoch": 0.44125609095831075,
+ "grad_norm": 1.9477506875991821,
+ "learning_rate": 2.8672590946959176e-05,
+ "loss": 0.118,
+ "step": 1630
+ },
+ {
+ "epoch": 0.4439631835408771,
+ "grad_norm": 2.7804646492004395,
+ "learning_rate": 2.8533740627603445e-05,
+ "loss": 0.086,
+ "step": 1640
+ },
+ {
+ "epoch": 0.4466702761234434,
+ "grad_norm": 0.9319368004798889,
+ "learning_rate": 2.839489030824771e-05,
+ "loss": 0.1907,
+ "step": 1650
+ },
+ {
+ "epoch": 0.44937736870600975,
+ "grad_norm": 0.6775910258293152,
+ "learning_rate": 2.8256039988891974e-05,
+ "loss": 0.1561,
+ "step": 1660
+ },
+ {
+ "epoch": 0.45208446128857604,
+ "grad_norm": 1.9625002145767212,
+ "learning_rate": 2.8117189669536243e-05,
+ "loss": 0.1348,
+ "step": 1670
+ },
+ {
+ "epoch": 0.4547915538711424,
+ "grad_norm": 1.8503785133361816,
+ "learning_rate": 2.7978339350180506e-05,
+ "loss": 0.1543,
+ "step": 1680
+ },
+ {
+ "epoch": 0.4574986464537087,
+ "grad_norm": 0.2912967801094055,
+ "learning_rate": 2.7839489030824772e-05,
+ "loss": 0.0855,
+ "step": 1690
+ },
+ {
+ "epoch": 0.46020573903627504,
+ "grad_norm": 3.9681107997894287,
+ "learning_rate": 2.7700638711469042e-05,
+ "loss": 0.0645,
+ "step": 1700
+ },
+ {
+ "epoch": 0.4629128316188414,
+ "grad_norm": 3.1310055255889893,
+ "learning_rate": 2.75617883921133e-05,
+ "loss": 0.111,
+ "step": 1710
+ },
+ {
+ "epoch": 0.4656199242014077,
+ "grad_norm": 0.8168225288391113,
+ "learning_rate": 2.742293807275757e-05,
+ "loss": 0.0863,
+ "step": 1720
+ },
+ {
+ "epoch": 0.46832701678397404,
+ "grad_norm": 1.4193518161773682,
+ "learning_rate": 2.7284087753401833e-05,
+ "loss": 0.1772,
+ "step": 1730
+ },
+ {
+ "epoch": 0.47103410936654033,
+ "grad_norm": 2.3890655040740967,
+ "learning_rate": 2.71452374340461e-05,
+ "loss": 0.1319,
+ "step": 1740
+ },
+ {
+ "epoch": 0.4737412019491067,
+ "grad_norm": 1.2436351776123047,
+ "learning_rate": 2.7006387114690362e-05,
+ "loss": 0.075,
+ "step": 1750
+ },
+ {
+ "epoch": 0.476448294531673,
+ "grad_norm": 1.5610893964767456,
+ "learning_rate": 2.6867536795334632e-05,
+ "loss": 0.0757,
+ "step": 1760
+ },
+ {
+ "epoch": 0.47915538711423933,
+ "grad_norm": 5.392906188964844,
+ "learning_rate": 2.6728686475978898e-05,
+ "loss": 0.1657,
+ "step": 1770
+ },
+ {
+ "epoch": 0.4818624796968056,
+ "grad_norm": 1.157339096069336,
+ "learning_rate": 2.658983615662316e-05,
+ "loss": 0.0908,
+ "step": 1780
+ },
+ {
+ "epoch": 0.484569572279372,
+ "grad_norm": 2.2501959800720215,
+ "learning_rate": 2.6450985837267427e-05,
+ "loss": 0.0982,
+ "step": 1790
+ },
+ {
+ "epoch": 0.48727666486193827,
+ "grad_norm": 2.779853105545044,
+ "learning_rate": 2.631213551791169e-05,
+ "loss": 0.0733,
+ "step": 1800
+ },
+ {
+ "epoch": 0.48727666486193827,
+ "eval_loss": 0.10408055782318115,
+ "eval_runtime": 40.9974,
+ "eval_samples_per_second": 12.196,
+ "eval_steps_per_second": 0.781,
+ "step": 1800
+ },
1344
+ {
+ "epoch": 0.4899837574445046,
+ "grad_norm": 0.5903124809265137,
+ "learning_rate": 2.617328519855596e-05,
+ "loss": 0.1078,
+ "step": 1810
+ },
+ {
+ "epoch": 0.4926908500270709,
+ "grad_norm": 2.7150614261627197,
+ "learning_rate": 2.6034434879200225e-05,
+ "loss": 0.143,
+ "step": 1820
+ },
+ {
+ "epoch": 0.49539794260963727,
+ "grad_norm": 1.6853638887405396,
+ "learning_rate": 2.5895584559844488e-05,
+ "loss": 0.0694,
+ "step": 1830
+ },
+ {
+ "epoch": 0.49810503519220356,
+ "grad_norm": 2.398505449295044,
+ "learning_rate": 2.5756734240488754e-05,
+ "loss": 0.0983,
+ "step": 1840
+ },
+ {
+ "epoch": 0.5008121277747699,
+ "grad_norm": 1.515865683555603,
+ "learning_rate": 2.5617883921133017e-05,
+ "loss": 0.0916,
+ "step": 1850
+ },
+ {
+ "epoch": 0.5035192203573362,
+ "grad_norm": 2.172353506088257,
+ "learning_rate": 2.5479033601777286e-05,
+ "loss": 0.0996,
+ "step": 1860
+ },
+ {
+ "epoch": 0.5062263129399025,
+ "grad_norm": 0.6916144490242004,
+ "learning_rate": 2.5340183282421552e-05,
+ "loss": 0.0751,
+ "step": 1870
+ },
+ {
+ "epoch": 0.5089334055224689,
+ "grad_norm": 2.9043545722961426,
+ "learning_rate": 2.5201332963065815e-05,
+ "loss": 0.127,
+ "step": 1880
+ },
+ {
+ "epoch": 0.5116404981050352,
+ "grad_norm": 2.2535910606384277,
+ "learning_rate": 2.5062482643710085e-05,
+ "loss": 0.0918,
+ "step": 1890
+ },
+ {
+ "epoch": 0.5143475906876015,
+ "grad_norm": 0.217793807387352,
+ "learning_rate": 2.4923632324354347e-05,
+ "loss": 0.075,
+ "step": 1900
+ },
+ {
+ "epoch": 0.5170546832701678,
+ "grad_norm": 0.3639616072177887,
+ "learning_rate": 2.4784782004998614e-05,
+ "loss": 0.1307,
+ "step": 1910
+ },
+ {
+ "epoch": 0.5197617758527342,
+ "grad_norm": 2.467031240463257,
+ "learning_rate": 2.464593168564288e-05,
+ "loss": 0.084,
+ "step": 1920
+ },
+ {
+ "epoch": 0.5224688684353005,
+ "grad_norm": 1.7595791816711426,
+ "learning_rate": 2.4507081366287142e-05,
+ "loss": 0.1313,
+ "step": 1930
+ },
+ {
+ "epoch": 0.5251759610178668,
+ "grad_norm": 1.6658025979995728,
+ "learning_rate": 2.436823104693141e-05,
+ "loss": 0.1204,
+ "step": 1940
+ },
+ {
+ "epoch": 0.5278830536004331,
+ "grad_norm": 4.211353302001953,
+ "learning_rate": 2.4229380727575675e-05,
+ "loss": 0.1387,
+ "step": 1950
+ },
+ {
+ "epoch": 0.5305901461829995,
+ "grad_norm": 1.0122289657592773,
+ "learning_rate": 2.409053040821994e-05,
+ "loss": 0.0633,
+ "step": 1960
+ },
+ {
+ "epoch": 0.5332972387655658,
+ "grad_norm": 3.7922515869140625,
+ "learning_rate": 2.3951680088864207e-05,
+ "loss": 0.0756,
+ "step": 1970
+ },
+ {
+ "epoch": 0.5360043313481321,
+ "grad_norm": 1.1087652444839478,
+ "learning_rate": 2.381282976950847e-05,
+ "loss": 0.0952,
+ "step": 1980
+ },
+ {
+ "epoch": 0.5387114239306985,
+ "grad_norm": 5.007467269897461,
+ "learning_rate": 2.3673979450152736e-05,
+ "loss": 0.1346,
+ "step": 1990
+ },
+ {
+ "epoch": 0.5414185165132648,
+ "grad_norm": 0.10054272413253784,
+ "learning_rate": 2.3535129130797002e-05,
+ "loss": 0.1087,
+ "step": 2000
+ },
+ {
+ "epoch": 0.5414185165132648,
+ "eval_loss": 0.08383670449256897,
+ "eval_runtime": 39.1001,
+ "eval_samples_per_second": 12.788,
+ "eval_steps_per_second": 0.818,
+ "step": 2000
+ },
1492
+ {
+ "epoch": 0.5441256090958311,
+ "grad_norm": 0.11041736602783203,
+ "learning_rate": 2.3396278811441265e-05,
+ "loss": 0.0928,
+ "step": 2010
+ },
+ {
+ "epoch": 0.5468327016783974,
+ "grad_norm": 1.4950013160705566,
+ "learning_rate": 2.3257428492085534e-05,
+ "loss": 0.0658,
+ "step": 2020
+ },
+ {
+ "epoch": 0.5495397942609638,
+ "grad_norm": 0.5610796213150024,
+ "learning_rate": 2.31185781727298e-05,
+ "loss": 0.1099,
+ "step": 2030
+ },
+ {
+ "epoch": 0.5522468868435301,
+ "grad_norm": 2.5905027389526367,
+ "learning_rate": 2.2979727853374063e-05,
+ "loss": 0.0818,
+ "step": 2040
+ },
+ {
+ "epoch": 0.5549539794260964,
+ "grad_norm": 1.3932690620422363,
+ "learning_rate": 2.284087753401833e-05,
+ "loss": 0.0814,
+ "step": 2050
+ },
+ {
+ "epoch": 0.5576610720086627,
+ "grad_norm": 4.6623854637146,
+ "learning_rate": 2.2702027214662595e-05,
+ "loss": 0.0849,
+ "step": 2060
+ },
+ {
+ "epoch": 0.5603681645912291,
+ "grad_norm": 0.45536836981773376,
+ "learning_rate": 2.2563176895306858e-05,
+ "loss": 0.1134,
+ "step": 2070
+ },
+ {
+ "epoch": 0.5630752571737954,
+ "grad_norm": 1.5323799848556519,
+ "learning_rate": 2.2424326575951127e-05,
+ "loss": 0.0767,
+ "step": 2080
+ },
+ {
+ "epoch": 0.5657823497563617,
+ "grad_norm": 2.6839871406555176,
+ "learning_rate": 2.228547625659539e-05,
+ "loss": 0.16,
+ "step": 2090
+ },
+ {
+ "epoch": 0.568489442338928,
+ "grad_norm": 2.929471731185913,
+ "learning_rate": 2.2146625937239656e-05,
+ "loss": 0.0812,
+ "step": 2100
+ },
+ {
+ "epoch": 0.5711965349214944,
+ "grad_norm": 3.792933702468872,
+ "learning_rate": 2.2007775617883922e-05,
+ "loss": 0.0896,
+ "step": 2110
+ },
+ {
+ "epoch": 0.5739036275040607,
+ "grad_norm": 0.5622618198394775,
+ "learning_rate": 2.1868925298528185e-05,
+ "loss": 0.0672,
+ "step": 2120
+ },
+ {
+ "epoch": 0.576610720086627,
+ "grad_norm": 0.7235398888587952,
+ "learning_rate": 2.1730074979172455e-05,
+ "loss": 0.1283,
+ "step": 2130
+ },
+ {
+ "epoch": 0.5793178126691932,
+ "grad_norm": 1.5156288146972656,
+ "learning_rate": 2.159122465981672e-05,
+ "loss": 0.0606,
+ "step": 2140
+ },
+ {
+ "epoch": 0.5820249052517596,
+ "grad_norm": 1.9470716714859009,
+ "learning_rate": 2.1452374340460984e-05,
+ "loss": 0.0725,
+ "step": 2150
+ },
+ {
+ "epoch": 0.584731997834326,
+ "grad_norm": 3.0913169384002686,
+ "learning_rate": 2.131352402110525e-05,
+ "loss": 0.1057,
+ "step": 2160
+ },
+ {
+ "epoch": 0.5874390904168922,
+ "grad_norm": 0.6105153560638428,
+ "learning_rate": 2.1174673701749516e-05,
+ "loss": 0.1056,
+ "step": 2170
+ },
+ {
+ "epoch": 0.5901461829994585,
+ "grad_norm": 6.130599498748779,
+ "learning_rate": 2.103582338239378e-05,
+ "loss": 0.1188,
+ "step": 2180
+ },
+ {
+ "epoch": 0.5928532755820249,
+ "grad_norm": 3.15583872795105,
+ "learning_rate": 2.0896973063038048e-05,
+ "loss": 0.0966,
+ "step": 2190
+ },
+ {
+ "epoch": 0.5955603681645912,
+ "grad_norm": 3.5246055126190186,
+ "learning_rate": 2.075812274368231e-05,
+ "loss": 0.0945,
+ "step": 2200
+ },
+ {
+ "epoch": 0.5955603681645912,
+ "eval_loss": 0.09864608943462372,
+ "eval_runtime": 42.9692,
+ "eval_samples_per_second": 11.636,
+ "eval_steps_per_second": 0.745,
+ "step": 2200
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 3694,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 200,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 1.0608419004638822e+18,
+ "train_batch_size": 8,
+ "trial_name": null,
+ "trial_params": null
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9bb4d622c9cac10340a7a574f4f6c9af9831334a109246469462e782cf87b9a
+ size 5777
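
The `trainer_state.json` added above records the Trainer's `log_history`: training entries carry `loss`/`grad_norm`/`learning_rate`, while evaluation entries carry `eval_loss` and runtime stats. A minimal sketch for extracting the training-loss curve from a local copy of the file (the path and the standard `log_history` layout are assumptions, not part of this commit):

```python
import json


def load_loss_curve(path):
    """Read a Hugging Face trainer_state.json and return (steps, losses)
    for the training-loss entries in log_history, skipping eval records."""
    with open(path) as f:
        state = json.load(f)
    # Training records have a "loss" key; eval records have "eval_loss" instead.
    entries = [e for e in state["log_history"] if "loss" in e]
    return [e["step"] for e in entries], [e["loss"] for e in entries]


# Hypothetical usage, assuming the file was downloaded locally:
#   steps, losses = load_loss_curve("trainer_state.json")
#   print(len(steps), losses[-1])
```

The same filter with `"eval_loss"` in place of `"loss"` recovers the evaluation curve (e.g. `eval_loss` 0.1217 at step 1000 down to 0.0838 at step 2000 in the log above).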