AbdoTW committed (verified)
Commit 568e8a2 · Parent(s): 6279980

Upload checkpoint-500
checkpoints-ocrTaskJson/checkpoint-500/README.md ADDED
@@ -0,0 +1,208 @@
+ ---
+ base_model: google/gemma-3-4b-it
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:google/gemma-3-4b-it
+ - llama-factory
+ - lora
+ - transformers
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.1
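Note: the card's "How to Get Started" section is still a placeholder. A minimal loading sketch consistent with this checkpoint's metadata (a PEFT LoRA adapter on google/gemma-3-4b-it) might look like the following; the local adapter path, the sample image, and the prompt are illustrative assumptions, and it presumes a transformers release that ships `Gemma3ForConditionalGeneration`:

```python
# Sketch only: load the base model, attach this LoRA adapter, run one OCR-style prompt.
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from peft import PeftModel

BASE = "google/gemma-3-4b-it"
ADAPTER = "checkpoints-ocrTaskJson/checkpoint-500"  # assumed local path

model = Gemma3ForConditionalGeneration.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA weights
processor = AutoProcessor.from_pretrained(ADAPTER)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "receipt.png"},  # hypothetical input image
        {"type": "text", "text": "Extract this document's text as JSON."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```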
checkpoints-ocrTaskJson/checkpoint-500/adapter_config.json ADDED
@@ -0,0 +1,145 @@
+ {
+   "alora_invocation_tokens": null,
+   "alpha_pattern": {},
+   "arrow_config": null,
+   "auto_mapping": null,
+   "base_model_name_or_path": "google/gemma-3-4b-it",
+   "bias": "none",
+   "corda_config": null,
+   "ensure_weight_tying": false,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 192,
+   "lora_bias": false,
+   "lora_dropout": 0.0,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "peft_version": "0.18.1",
+   "qalora_group_size": 16,
+   "r": 96,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "language_model.layers.24.self_attn.q_proj",
+     "29.self_attn.v_proj",
+     "30.self_attn.k_proj",
+     "up_proj",
+     "language_model.layers.17.self_attn.k_proj",
+     "27.self_attn.v_proj",
+     "language_model.layers.9.self_attn.k_proj",
+     "language_model.layers.26.self_attn.v_proj",
+     "language_model.layers.8.self_attn.k_proj",
+     "language_model.layers.6.self_attn.q_proj",
+     "language_model.layers.3.self_attn.q_proj",
+     "language_model.layers.15.self_attn.v_proj",
+     "language_model.layers.20.self_attn.v_proj",
+     "language_model.layers.17.self_attn.v_proj",
+     "28.self_attn.k_proj",
+     "language_model.layers.11.self_attn.q_proj",
+     "language_model.layers.19.self_attn.k_proj",
+     "language_model.layers.1.self_attn.q_proj",
+     "language_model.layers.13.self_attn.v_proj",
+     "language_model.layers.23.self_attn.v_proj",
+     "language_model.layers.7.self_attn.k_proj",
+     "language_model.layers.0.self_attn.k_proj",
+     "language_model.layers.24.self_attn.v_proj",
+     "language_model.layers.11.self_attn.k_proj",
+     "language_model.layers.26.self_attn.k_proj",
+     "language_model.layers.18.self_attn.q_proj",
+     "language_model.layers.3.self_attn.k_proj",
+     "language_model.layers.4.self_attn.q_proj",
+     "language_model.layers.9.self_attn.q_proj",
+     "language_model.layers.24.self_attn.k_proj",
+     "language_model.layers.3.self_attn.v_proj",
+     "30.self_attn.q_proj",
+     "language_model.layers.17.self_attn.q_proj",
+     "language_model.layers.16.self_attn.k_proj",
+     "language_model.layers.25.self_attn.k_proj",
+     "language_model.layers.13.self_attn.q_proj",
+     "language_model.layers.19.self_attn.q_proj",
+     "language_model.layers.23.self_attn.q_proj",
+     "language_model.layers.14.self_attn.q_proj",
+     "language_model.layers.22.self_attn.k_proj",
+     "language_model.layers.10.self_attn.q_proj",
+     "31.self_attn.q_proj",
+     "down_proj",
+     "language_model.layers.21.self_attn.v_proj",
+     "language_model.layers.12.self_attn.q_proj",
+     "language_model.layers.14.self_attn.v_proj",
+     "language_model.layers.4.self_attn.v_proj",
+     "language_model.layers.6.self_attn.v_proj",
+     "language_model.layers.8.self_attn.v_proj",
+     "language_model.layers.18.self_attn.k_proj",
+     "language_model.layers.25.self_attn.v_proj",
+     "o_proj",
+     "language_model.layers.5.self_attn.q_proj",
+     "29.self_attn.k_proj",
+     "language_model.layers.15.self_attn.k_proj",
+     "language_model.layers.9.self_attn.v_proj",
+     "language_model.layers.0.self_attn.q_proj",
+     "33.self_attn.q_proj",
+     "29.self_attn.q_proj",
+     "language_model.layers.11.self_attn.v_proj",
+     "31.self_attn.k_proj",
+     "language_model.layers.14.self_attn.k_proj",
+     "27.self_attn.k_proj",
+     "language_model.layers.21.self_attn.k_proj",
+     "language_model.layers.2.self_attn.k_proj",
+     "language_model.layers.19.self_attn.v_proj",
+     "language_model.layers.20.self_attn.q_proj",
+     "language_model.layers.1.self_attn.k_proj",
+     "32.self_attn.q_proj",
+     "language_model.layers.23.self_attn.k_proj",
+     "language_model.layers.13.self_attn.k_proj",
+     "language_model.layers.2.self_attn.v_proj",
+     "28.self_attn.q_proj",
+     "language_model.layers.5.self_attn.v_proj",
+     "language_model.layers.16.self_attn.v_proj",
+     "32.self_attn.v_proj",
+     "33.self_attn.k_proj",
+     "language_model.layers.7.self_attn.v_proj",
+     "language_model.layers.7.self_attn.q_proj",
+     "language_model.layers.22.self_attn.q_proj",
+     "language_model.layers.18.self_attn.v_proj",
+     "language_model.layers.10.self_attn.v_proj",
+     "language_model.layers.6.self_attn.k_proj",
+     "language_model.layers.20.self_attn.k_proj",
+     "33.self_attn.v_proj",
+     "30.self_attn.v_proj",
+     "language_model.layers.22.self_attn.v_proj",
+     "language_model.layers.0.self_attn.v_proj",
+     "gate_proj",
+     "27.self_attn.q_proj",
+     "language_model.layers.25.self_attn.q_proj",
+     "language_model.layers.26.self_attn.q_proj",
+     "32.self_attn.k_proj",
+     "28.self_attn.v_proj",
+     "language_model.layers.12.self_attn.k_proj",
+     "language_model.layers.21.self_attn.q_proj",
+     "language_model.layers.12.self_attn.v_proj",
+     "language_model.layers.1.self_attn.v_proj",
+     "language_model.layers.2.self_attn.q_proj",
+     "language_model.layers.8.self_attn.q_proj",
+     "language_model.layers.5.self_attn.k_proj",
+     "language_model.layers.4.self_attn.k_proj",
+     "31.self_attn.v_proj",
+     "language_model.layers.15.self_attn.q_proj",
+     "language_model.layers.16.self_attn.q_proj",
+     "language_model.layers.10.self_attn.k_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
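For readers skimming the config: this is a rank-96 LoRA with alpha 192 over per-layer attention q/k/v projections, plus bare patterns (o_proj, up_proj, down_proj, gate_proj) that PEFT matches by suffix across all layers. Since "use_rslora" is false, PEFT scales each LoRA update by lora_alpha / r. A small sketch that reads these values back from the file (local path assumed):

```python
# Sketch: read the LoRA hyperparameters back out of this adapter_config.json.
import json

with open("checkpoints-ocrTaskJson/checkpoint-500/adapter_config.json") as f:
    cfg = json.load(f)

# With "use_rslora": false, the effective scaling factor is lora_alpha / r.
print("rank:", cfg["r"])                               # 96
print("alpha:", cfg["lora_alpha"])                     # 192
print("scaling:", cfg["lora_alpha"] / cfg["r"])        # 2.0
print("target patterns:", len(cfg["target_modules"]))  # 106 entries
```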
checkpoints-ocrTaskJson/checkpoint-500/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca3165d8d24c52ea8c6ff91fafa16e9d850ddca9b07b7e762e5a25aff5c9fc2a
+ size 715331368
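The weight file itself lives in Git LFS; what is committed here is only a pointer recording the object's sha256 and byte size. After downloading, the file can be checked against the pointer, as in this sketch (the local path is an assumption):

```python
# Sketch: verify a downloaded LFS object against its pointer's oid and size.
import hashlib
import os

path = "checkpoints-ocrTaskJson/checkpoint-500/adapter_model.safetensors"
expected_oid = "ca3165d8d24c52ea8c6ff91fafa16e9d850ddca9b07b7e762e5a25aff5c9fc2a"
expected_size = 715331368  # bytes, from the pointer above

assert os.path.getsize(path) == expected_size, "size mismatch"
sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
assert sha.hexdigest() == expected_oid, "sha256 mismatch"
print("adapter weights match the LFS pointer")
```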
checkpoints-ocrTaskJson/checkpoint-500/added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "<image_soft_token>": 262144
+ }
checkpoints-ocrTaskJson/checkpoint-500/chat_template.jinja ADDED
@@ -0,0 +1,47 @@
+ {{ bos_token }}
+ {%- if messages[0]['role'] == 'system' -%}
+     {%- if messages[0]['content'] is string -%}
+         {%- set first_user_prefix = messages[0]['content'] + '
+
+ ' -%}
+     {%- else -%}
+         {%- set first_user_prefix = messages[0]['content'][0]['text'] + '
+
+ ' -%}
+     {%- endif -%}
+     {%- set loop_messages = messages[1:] -%}
+ {%- else -%}
+     {%- set first_user_prefix = "" -%}
+     {%- set loop_messages = messages -%}
+ {%- endif -%}
+ {%- for message in loop_messages -%}
+     {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
+         {{ raise_exception("Conversation roles must alternate user/assistant/user/assistant/...") }}
+     {%- endif -%}
+     {%- if (message['role'] == 'assistant') -%}
+         {%- set role = "model" -%}
+     {%- else -%}
+         {%- set role = message['role'] -%}
+     {%- endif -%}
+     {{ '<start_of_turn>' + role + '
+ ' + (first_user_prefix if loop.first else "") }}
+     {%- if message['content'] is string -%}
+         {{ message['content'] | trim }}
+     {%- elif message['content'] is iterable -%}
+         {%- for item in message['content'] -%}
+             {%- if item['type'] == 'image' -%}
+                 {{ '<start_of_image>' }}
+             {%- elif item['type'] == 'text' -%}
+                 {{ item['text'] | trim }}
+             {%- endif -%}
+         {%- endfor -%}
+     {%- else -%}
+         {{ raise_exception("Invalid content type") }}
+     {%- endif -%}
+     {{ '<end_of_turn>
+ ' }}
+ {%- endfor -%}
+ {%- if add_generation_prompt -%}
+     {{'<start_of_turn>model
+ '}}
+ {%- endif -%}
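This template encodes Gemma 3's turn format: a leading system message is folded into the first user turn, the assistant role is renamed `model`, and each image content item collapses to a single `<start_of_image>` marker (which the processor later expands into image soft tokens). The template uses only core Jinja features, so one way to inspect its output is to render it directly with jinja2, as in this sketch (local file path assumed):

```python
# Sketch: render the committed chat template with jinja2 to inspect the turn format.
from jinja2 import Environment

with open("checkpoints-ocrTaskJson/checkpoint-500/chat_template.jinja") as f:
    source = f.read()

def raise_exception(message):  # the template calls this on malformed conversations
    raise ValueError(message)

env = Environment()
env.globals["raise_exception"] = raise_exception
template = env.from_string(source)

print(template.render(
    bos_token="<bos>",
    messages=[{"role": "user", "content": "Read this receipt."}],
    add_generation_prompt=True,
))
# Expected shape of the output:
# <bos><start_of_turn>user
# Read this receipt.<end_of_turn>
# <start_of_turn>model
```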
checkpoints-ocrTaskJson/checkpoint-500/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d634451210e872ef1471a4ab11ca2838e8116b60ef496abc23282b13740b9f01
+ size 1430929015
checkpoints-ocrTaskJson/checkpoint-500/preprocessor_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "do_convert_rgb": null,
+   "do_normalize": true,
+   "do_pan_and_scan": null,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "Gemma3ImageProcessor",
+   "image_seq_length": 256,
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "pan_and_scan_max_num_crops": null,
+   "pan_and_scan_min_crop_size": null,
+   "pan_and_scan_min_ratio_to_activate": null,
+   "processor_class": "Gemma3Processor",
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 896,
+     "width": 896
+   }
+ }
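In plain terms, this preprocessor resizes every image to 896×896 with bilinear resampling ("resample": 2), rescales pixel values by 1/255, and normalizes with per-channel mean and std of 0.5, mapping pixels into [-1, 1]; each image then occupies image_seq_length = 256 soft tokens in the prompt. A hand-rolled equivalent of the pixel math, as a sketch (the input file name is an assumption):

```python
# Sketch: replicate the pixel preprocessing described by this config with PIL/numpy.
import numpy as np
from PIL import Image

img = Image.open("receipt.png").convert("RGB")   # hypothetical input
img = img.resize((896, 896), Image.BILINEAR)     # "resample": 2 is bilinear

x = np.asarray(img, dtype=np.float32)
x = x * 0.00392156862745098                      # rescale_factor = 1/255
x = (x - 0.5) / 0.5                              # image_mean = image_std = 0.5
print(x.shape, float(x.min()), float(x.max()))   # (896, 896, 3), within [-1, 1]
```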
checkpoints-ocrTaskJson/checkpoint-500/processor_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "image_seq_length": 256,
+   "processor_class": "Gemma3Processor"
+ }
checkpoints-ocrTaskJson/checkpoint-500/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75927f187e0b4eb1609bb7f9f81a0eafefdf19b7e5c2df3fa3f74b6872fc1e09
+ size 14645
checkpoints-ocrTaskJson/checkpoint-500/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e46ebbd018bf8e29c1fd71c099b641675fa468a2ca5b41dc493e502c0af86a1
+ size 1465
checkpoints-ocrTaskJson/checkpoint-500/special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "boi_token": "<start_of_image>",
+   "bos_token": {
+     "content": "<bos>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eoi_token": "<end_of_image>",
+   "eos_token": {
+     "content": "<end_of_turn>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "image_token": "<image_soft_token>",
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
checkpoints-ocrTaskJson/checkpoint-500/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
+ size 4689074
checkpoints-ocrTaskJson/checkpoint-500/tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoints-ocrTaskJson/checkpoint-500/trainer_state.json ADDED
@@ -0,0 +1,254 @@
+ {
+   "best_global_step": null,
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 3.9682539682539684,
+   "eval_steps": 50,
+   "global_step": 500,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.1984126984126984,
+       "grad_norm": 1.0922354459762573,
+       "learning_rate": 9.523809523809523e-06,
+       "loss": 0.8473,
+       "step": 25
+     },
+     {
+       "epoch": 0.3968253968253968,
+       "grad_norm": 0.7849488854408264,
+       "learning_rate": 1.9444444444444445e-05,
+       "loss": 0.3692,
+       "step": 50
+     },
+     {
+       "epoch": 0.3968253968253968,
+       "eval_loss": 0.3103943467140198,
+       "eval_runtime": 10.1545,
+       "eval_samples_per_second": 4.136,
+       "eval_steps_per_second": 2.068,
+       "step": 50
+     },
+     {
+       "epoch": 0.5952380952380952,
+       "grad_norm": 0.4042750298976898,
+       "learning_rate": 2.9365079365079366e-05,
+       "loss": 0.249,
+       "step": 75
+     },
+     {
+       "epoch": 0.7936507936507936,
+       "grad_norm": 0.505721926689148,
+       "learning_rate": 3.928571428571429e-05,
+       "loss": 0.2248,
+       "step": 100
+     },
+     {
+       "epoch": 0.7936507936507936,
+       "eval_loss": 0.23757970333099365,
+       "eval_runtime": 10.1307,
+       "eval_samples_per_second": 4.146,
+       "eval_steps_per_second": 2.073,
+       "step": 100
+     },
+     {
+       "epoch": 0.9920634920634921,
+       "grad_norm": 0.39118653535842896,
+       "learning_rate": 4.9206349206349204e-05,
+       "loss": 0.1997,
+       "step": 125
+     },
+     {
+       "epoch": 1.1904761904761905,
+       "grad_norm": 0.4834806025028229,
+       "learning_rate": 5.912698412698413e-05,
+       "loss": 0.1717,
+       "step": 150
+     },
+     {
+       "epoch": 1.1904761904761905,
+       "eval_loss": 0.20604592561721802,
+       "eval_runtime": 10.1545,
+       "eval_samples_per_second": 4.136,
+       "eval_steps_per_second": 2.068,
+       "step": 150
+     },
+     {
+       "epoch": 1.3888888888888888,
+       "grad_norm": 0.38224297761917114,
+       "learning_rate": 6.904761904761905e-05,
+       "loss": 0.1684,
+       "step": 175
+     },
+     {
+       "epoch": 1.5873015873015874,
+       "grad_norm": 0.35443320870399475,
+       "learning_rate": 7.896825396825397e-05,
+       "loss": 0.1502,
+       "step": 200
+     },
+     {
+       "epoch": 1.5873015873015874,
+       "eval_loss": 0.19332902133464813,
+       "eval_runtime": 10.0977,
+       "eval_samples_per_second": 4.159,
+       "eval_steps_per_second": 2.08,
+       "step": 200
+     },
+     {
+       "epoch": 1.7857142857142856,
+       "grad_norm": 0.3137013614177704,
+       "learning_rate": 8.888888888888889e-05,
+       "loss": 0.1549,
+       "step": 225
+     },
+     {
+       "epoch": 1.9841269841269842,
+       "grad_norm": 0.30676764249801636,
+       "learning_rate": 9.880952380952381e-05,
+       "loss": 0.1449,
+       "step": 250
+     },
+     {
+       "epoch": 1.9841269841269842,
+       "eval_loss": 0.17996199429035187,
+       "eval_runtime": 10.0168,
+       "eval_samples_per_second": 4.193,
+       "eval_steps_per_second": 2.096,
+       "step": 250
+     },
+     {
+       "epoch": 2.1825396825396823,
+       "grad_norm": 0.31375375390052795,
+       "learning_rate": 9.997678517546382e-05,
+       "loss": 0.1111,
+       "step": 275
+     },
+     {
+       "epoch": 2.380952380952381,
+       "grad_norm": 0.2725541293621063,
+       "learning_rate": 9.989407561073525e-05,
+       "loss": 0.1016,
+       "step": 300
+     },
+     {
+       "epoch": 2.380952380952381,
+       "eval_loss": 0.17762605845928192,
+       "eval_runtime": 10.0289,
+       "eval_samples_per_second": 4.188,
+       "eval_steps_per_second": 2.094,
+       "step": 300
+     },
+     {
+       "epoch": 2.5793650793650795,
+       "grad_norm": 0.31162211298942566,
+       "learning_rate": 9.975153876827008e-05,
+       "loss": 0.1044,
+       "step": 325
+     },
+     {
+       "epoch": 2.7777777777777777,
+       "grad_norm": 0.2855686843395233,
+       "learning_rate": 9.954934556197257e-05,
+       "loss": 0.1087,
+       "step": 350
+     },
+     {
+       "epoch": 2.7777777777777777,
+       "eval_loss": 0.1633986085653305,
+       "eval_runtime": 10.056,
+       "eval_samples_per_second": 4.177,
+       "eval_steps_per_second": 2.088,
+       "step": 350
+     },
+     {
+       "epoch": 2.9761904761904763,
+       "grad_norm": 0.33260759711265564,
+       "learning_rate": 9.928773843884593e-05,
+       "loss": 0.1028,
+       "step": 375
+     },
+     {
+       "epoch": 3.1746031746031744,
+       "grad_norm": 0.3426005244255066,
+       "learning_rate": 9.896703108827759e-05,
+       "loss": 0.0725,
+       "step": 400
+     },
+     {
+       "epoch": 3.1746031746031744,
+       "eval_loss": 0.17406687140464783,
+       "eval_runtime": 10.0302,
+       "eval_samples_per_second": 4.187,
+       "eval_steps_per_second": 2.094,
+       "step": 400
+     },
+     {
+       "epoch": 3.373015873015873,
+       "grad_norm": 0.3507950007915497,
+       "learning_rate": 9.85876080658986e-05,
+       "loss": 0.0698,
+       "step": 425
+     },
+     {
+       "epoch": 3.571428571428571,
+       "grad_norm": 0.3167800009250641,
+       "learning_rate": 9.814992433246858e-05,
+       "loss": 0.0662,
+       "step": 450
+     },
+     {
+       "epoch": 3.571428571428571,
+       "eval_loss": 0.17003558576107025,
+       "eval_runtime": 10.035,
+       "eval_samples_per_second": 4.185,
+       "eval_steps_per_second": 2.093,
+       "step": 450
+     },
+     {
+       "epoch": 3.7698412698412698,
+       "grad_norm": 0.31847792863845825,
+       "learning_rate": 9.765450470833865e-05,
+       "loss": 0.0724,
+       "step": 475
+     },
+     {
+       "epoch": 3.9682539682539684,
+       "grad_norm": 0.3440268635749817,
+       "learning_rate": 9.710194324414683e-05,
+       "loss": 0.0736,
+       "step": 500
+     },
+     {
+       "epoch": 3.9682539682539684,
+       "eval_loss": 0.1597495824098587,
+       "eval_runtime": 10.0596,
+       "eval_samples_per_second": 4.175,
+       "eval_steps_per_second": 2.088,
+       "step": 500
+     }
+   ],
+   "logging_steps": 25,
+   "max_steps": 2520,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 20,
+   "save_steps": 50,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 3.023524878274253e+17,
+   "train_batch_size": 2,
+   "trial_name": null,
+   "trial_params": null
+ }
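The numbers here are internally consistent: max_steps 2520 over num_train_epochs 20 gives 126 optimizer steps per epoch, so global_step 500 lands at epoch 500/126 ≈ 3.97, and eval loss falls steadily from 0.310 at step 50 to 0.160 at step 500. A short sketch for pulling the eval curve out of this file (local path assumed):

```python
# Sketch: extract the eval-loss curve from trainer_state.json.
import json

with open("checkpoints-ocrTaskJson/checkpoint-500/trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    if "eval_loss" in entry:
        print(f"step {entry['step']:4d}  eval_loss {entry['eval_loss']:.4f}")
# step   50  eval_loss 0.3104
# ...
# step  500  eval_loss 0.1597
```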
checkpoints-ocrTaskJson/checkpoint-500/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6eded22602ef3539e4106094e170aeba16b5c8aaa33c2d9f4a1313e7fd5cf2d4
+ size 6289