BBehring committed (verified)
Commit b025dfd · Parent: fdf9244

add lora_v21_seed_45 (V2.2 H100 canonical run)

lora_v21_seed_45/README.md ADDED
@@ -0,0 +1,206 @@
+ ---
+ base_model: microsoft/deberta-v3-base
+ library_name: peft
+ tags:
+ - base_model:adapter:microsoft/deberta-v3-base
+ - lora
+ - transformers
+ ---
+
+ # Model Card for lora_v21_seed_45
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ LoRA adapter for microsoft/deberta-v3-base, fine-tuned for binary sequence classification with a class-weighted loss (V2.2 H100 canonical run, seed 45).
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the V2.2 canonical training run (seed 45) of a LoRA adapter on microsoft/deberta-v3-base, trained on an H100; see train_config.json and train_history.json in this folder for the exact hyperparameters and logs.
+
+ - **Developed by:** BBehring
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** LoRA adapter (PEFT) for sequence classification (SEQ_CLS)
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** microsoft/deberta-v3-base
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ (A minimal usage sketch is provided after this file's diff.)
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** fp16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ PR-AUC on the validation split (eval_pr_auc in train_history.json).
+
+ ### Results
+
+ Best validation PR-AUC: 0.9792 at epoch 7, step 420 (see train_history.json).
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** NVIDIA H100 (per the commit message)
+ - **Hours used:** ~0.02 (train_runtime ≈ 62 s per train_history.json)
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ microsoft/deberta-v3-base with LoRA adapters (r=8, alpha=16, dropout 0.1) on the attention query_proj and value_proj projections; sequence-classification objective with class-weighted cross-entropy.
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ NVIDIA H100 (per the commit message)
+
+ #### Software
+
+ PEFT 0.19.1 (see Framework versions below)
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+ ### Framework versions
+
+ - PEFT 0.19.1
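
The card's "How to Get Started" section above is still a placeholder, so here is a minimal usage sketch, not the author's original script. Assumptions: the adapter folder from this commit is available locally as `lora_v21_seed_45/`, the task is the binary classification implied by `class_weights.json`, and the label names (not recorded in this commit) are left as generic indices.

```python
# Minimal sketch for loading this adapter; assumes
# pip install torch transformers peft sentencepiece
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

ADAPTER_DIR = "lora_v21_seed_45"  # local copy of this commit's folder

tokenizer = AutoTokenizer.from_pretrained(ADAPTER_DIR)
base = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2
)
model = PeftModel.from_pretrained(base, ADAPTER_DIR)  # LoRA deltas + saved classifier/pooler
model.eval()

inputs = tokenizer("text to classify", truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
print({"class_0": probs[0].item(), "class_1": probs[1].item()})
```

For deployment, `model.merge_and_unload()` folds the LoRA deltas into the base weights so the result runs without peft installed.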
lora_v21_seed_45/adapter_config.json ADDED
@@ -0,0 +1,47 @@
+ {
+ "alora_invocation_tokens": null,
+ "alpha_pattern": {},
+ "arrow_config": null,
+ "auto_mapping": null,
+ "base_model_name_or_path": "microsoft/deberta-v3-base",
+ "bias": "none",
+ "corda_config": null,
+ "ensure_weight_tying": false,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_bias": false,
+ "lora_dropout": 0.1,
+ "lora_ga_config": null,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": [
+ "classifier",
+ "pooler",
+ "score"
+ ],
+ "peft_type": "LORA",
+ "peft_version": "0.19.1",
+ "qalora_group_size": 16,
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "value_proj",
+ "query_proj"
+ ],
+ "target_parameters": null,
+ "task_type": "SEQ_CLS",
+ "trainable_token_indices": null,
+ "use_bdlora": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
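
For orientation: with `r` = 8 and `lora_alpha` = 16 the low-rank update is scaled by lora_alpha / r = 2.0, only the attention `query_proj`/`value_proj` matrices receive LoRA deltas, and the modules in `modules_to_save` are trained in full and stored alongside the deltas (peft itself appends the standard `classifier`/`score` head names for SEQ_CLS tasks). A sketch of the equivalent peft-side configuration, assuming PEFT 0.19.x:

```python
# Sketch: a LoraConfig reproducing the key fields of adapter_config.json.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

lora_cfg = LoraConfig(
    task_type="SEQ_CLS",
    r=8,
    lora_alpha=16,                                # update scaled by 16 / 8 = 2.0
    lora_dropout=0.1,
    bias="none",
    target_modules=["query_proj", "value_proj"],  # DeBERTa attention projections
    modules_to_save=["classifier", "pooler"],     # head + pooler trained in full
)
base = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # prints trainable vs. total parameter counts
```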
lora_v21_seed_45/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdfc99586a2f997ce36d0a5492d28585b3a6d1e71829498ff1ff1b385db7a2fb
+ size 3555616
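
What was committed here is a Git LFS pointer, not the tensors themselves; the real `adapter_model.safetensors` is 3,555,616 bytes (~3.6 MB) and is fetched with `git lfs pull`. A sketch for verifying the resolved file against the pointer and listing what it stores, assuming `pip install safetensors`:

```python
# Sketch: check the LFS object against the pointer's oid, then list tensors.
import hashlib
from safetensors import safe_open

PATH = "lora_v21_seed_45/adapter_model.safetensors"

digest = hashlib.sha256(open(PATH, "rb").read()).hexdigest()
assert digest == "fdfc99586a2f997ce36d0a5492d28585b3a6d1e71829498ff1ff1b385db7a2fb"

with safe_open(PATH, framework="pt") as f:
    for name in f.keys():  # lora_A/lora_B pairs plus the saved head weights
        print(name, f.get_slice(name).get_shape())
```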
lora_v21_seed_45/added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "[MASK]": 128000
+ }
lora_v21_seed_45/class_weights.json ADDED
@@ -0,0 +1 @@
+ {"class_0": 0.6787749528884888, "class_1": 1.8984063863754272}
lora_v21_seed_45/special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+ "bos_token": "[CLS]",
+ "cls_token": "[CLS]",
+ "eos_token": "[SEP]",
+ "mask_token": "[MASK]",
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
lora_v21_seed_45/spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c679fbf93643d19aab7ee10c0b99e460bdbc02fedf34b92b05af343b4af586fd
+ size 2464616
lora_v21_seed_45/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
lora_v21_seed_45/tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "3": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128000": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "[CLS]",
+ "clean_up_tokenization_spaces": false,
+ "cls_token": "[CLS]",
+ "do_lower_case": false,
+ "eos_token": "[SEP]",
+ "extra_special_tokens": {},
+ "mask_token": "[MASK]",
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "sp_model_kwargs": {},
+ "split_by_punct": false,
+ "tokenizer_class": "DebertaV2Tokenizer",
+ "unk_token": "[UNK]",
+ "vocab_type": "spm"
+ }
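
Two quirks of this tokenizer are worth knowing: it is SentencePiece-based (DebertaV2Tokenizer reads spm.model and needs the sentencepiece package), and DeBERTa-v3 places [MASK] at id 128000, beyond the regular SentencePiece ids, which is what added_tokens.json records. Also, model_max_length here is the transformers "unset" sentinel, so cap lengths yourself (training used max_length=512). A quick check, assuming a local copy of the directory:

```python
# Sketch: load the tokenizer and confirm the special-token layout above.
from transformers import AutoTokenizer  # requires `pip install sentencepiece`

tok = AutoTokenizer.from_pretrained("lora_v21_seed_45")
print(type(tok).__name__)            # DebertaV2Tokenizer(Fast)
print(tok.mask_token_id)             # 128000, from added_tokens.json
print(tok.cls_token, tok.sep_token)  # [CLS] [SEP]

# model_max_length is a sentinel, so pass an explicit cap (512 during training).
enc = tok("some text", truncation=True, max_length=512)
print(len(enc["input_ids"]))
```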
lora_v21_seed_45/train_config.json ADDED
@@ -0,0 +1,25 @@
+ {
+ "base_model": "microsoft/deberta-v3-base",
+ "max_length": 512,
+ "lora_r": 8,
+ "lora_alpha": 16,
+ "lora_dropout": 0.1,
+ "lora_target_modules": [
+ "query_proj",
+ "value_proj"
+ ],
+ "lora_modules_to_save": [
+ "classifier",
+ "pooler"
+ ],
+ "learning_rate": 0.0002,
+ "weight_decay": 0.01,
+ "num_epochs": 8,
+ "batch_size": 8,
+ "grad_accumulation": 2,
+ "warmup_ratio": 0.06,
+ "fp16": true,
+ "gradient_checkpointing": true,
+ "class_weighted_loss": true,
+ "early_stopping_patience": 2
+ }
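
In Trainer terms this is an effective batch size of 8 × 2 = 16, LR 2e-4 with 6% linear warmup, fp16, gradient checkpointing, and early stopping with patience 2. The original training script is not in this commit, so the following TrainingArguments sketch is a reconstruction; the selection metric `eval_pr_auc` is inferred from train_history.json:

```python
# Sketch: transformers TrainingArguments matching train_config.json.
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="lora_v21_seed_45",
    learning_rate=2e-4,
    weight_decay=0.01,
    num_train_epochs=8,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,   # effective batch size 16
    warmup_ratio=0.06,
    fp16=True,
    gradient_checkpointing=True,
    eval_strategy="epoch",           # `evaluation_strategy` on older transformers
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_pr_auc",  # inferred from the log's eval records
)
early_stop = EarlyStoppingCallback(early_stopping_patience=2)
```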
lora_v21_seed_45/train_history.json ADDED
@@ -0,0 +1,60 @@
+ [
+ {"loss": 0.7008, "grad_norm": 1.6835390329360962, "learning_rate": 6.206896551724138e-05, "epoch": 0.16666666666666666, "step": 10},
+ {"loss": 0.6976, "grad_norm": 1.1720629930496216, "learning_rate": 0.00013103448275862068, "epoch": 0.3333333333333333, "step": 20},
+ {"loss": 0.6915, "grad_norm": 1.8779666423797607, "learning_rate": 0.0002, "epoch": 0.5, "step": 30},
+ {"loss": 0.681, "grad_norm": 4.258940696716309, "learning_rate": 0.00019556541019955653, "epoch": 0.6666666666666666, "step": 40},
+ {"loss": 0.637, "grad_norm": 1.669708251953125, "learning_rate": 0.00019113082039911309, "epoch": 0.8333333333333334, "step": 50},
+ {"loss": 0.5045, "grad_norm": 2.4921090602874756, "learning_rate": 0.00018669623059866964, "epoch": 1.0, "step": 60},
+ {"eval_loss": 0.38680821657180786, "eval_pr_auc": 0.922113848519589, "eval_runtime": 0.4804, "eval_samples_per_second": 424.66, "eval_steps_per_second": 54.123, "epoch": 1.0, "step": 60},
+ {"loss": 0.3147, "grad_norm": 2.2998976707458496, "learning_rate": 0.00018226164079822616, "epoch": 1.1666666666666667, "step": 70},
+ {"loss": 0.2106, "grad_norm": 0.9478673934936523, "learning_rate": 0.00017782705099778271, "epoch": 1.3333333333333333, "step": 80},
+ {"loss": 0.4153, "grad_norm": 2.211520195007324, "learning_rate": 0.00017339246119733924, "epoch": 1.5, "step": 90},
+ {"loss": 0.1793, "grad_norm": 0.6265550851821899, "learning_rate": 0.00016895787139689582, "epoch": 1.6666666666666665, "step": 100},
+ {"loss": 0.2251, "grad_norm": 0.8771417140960693, "learning_rate": 0.00016452328159645234, "epoch": 1.8333333333333335, "step": 110},
+ {"loss": 0.2688, "grad_norm": 0.5613293647766113, "learning_rate": 0.00016008869179600887, "epoch": 2.0, "step": 120},
+ {"eval_loss": 0.19772757589817047, "eval_pr_auc": 0.9580288514660931, "eval_runtime": 0.4007, "eval_samples_per_second": 509.049, "eval_steps_per_second": 64.879, "epoch": 2.0, "step": 120},
+ {"loss": 0.1171, "grad_norm": 0.4529179036617279, "learning_rate": 0.00015565410199556542, "epoch": 2.1666666666666665, "step": 130},
+ {"loss": 0.2049, "grad_norm": 0.7560976147651672, "learning_rate": 0.00015121951219512197, "epoch": 2.3333333333333335, "step": 140},
+ {"loss": 0.1763, "grad_norm": 3.764814615249634, "learning_rate": 0.0001467849223946785, "epoch": 2.5, "step": 150},
+ {"loss": 0.1502, "grad_norm": 0.3239579498767853, "learning_rate": 0.00014235033259423505, "epoch": 2.6666666666666665, "step": 160},
+ {"loss": 0.1717, "grad_norm": 0.2796325385570526, "learning_rate": 0.00013791574279379157, "epoch": 2.8333333333333335, "step": 170},
+ {"loss": 0.2059, "grad_norm": 0.9186252355575562, "learning_rate": 0.00013348115299334812, "epoch": 3.0, "step": 180},
+ {"eval_loss": 0.18136896193027496, "eval_pr_auc": 0.9692257043567982, "eval_runtime": 0.4033, "eval_samples_per_second": 505.866, "eval_steps_per_second": 64.473, "epoch": 3.0, "step": 180},
+ {"loss": 0.0671, "grad_norm": 0.3994855284690857, "learning_rate": 0.00012904656319290468, "epoch": 3.1666666666666665, "step": 190},
+ {"loss": 0.1077, "grad_norm": 5.069406509399414, "learning_rate": 0.0001246119733924612, "epoch": 3.3333333333333335, "step": 200},
+ {"loss": 0.163, "grad_norm": 0.6471376419067383, "learning_rate": 0.00012017738359201774, "epoch": 3.5, "step": 210},
+ {"loss": 0.1038, "grad_norm": 0.20224113762378693, "learning_rate": 0.00011574279379157429, "epoch": 3.6666666666666665, "step": 220},
+ {"loss": 0.121, "grad_norm": 3.1991147994995117, "learning_rate": 0.00011130820399113082, "epoch": 3.8333333333333335, "step": 230},
+ {"loss": 0.1008, "grad_norm": 0.20365029573440552, "learning_rate": 0.00010687361419068738, "epoch": 4.0, "step": 240},
+ {"eval_loss": 0.2102530300617218, "eval_pr_auc": 0.9742538947286151, "eval_runtime": 0.3965, "eval_samples_per_second": 514.466, "eval_steps_per_second": 65.569, "epoch": 4.0, "step": 240},
+ {"loss": 0.0438, "grad_norm": 0.21605148911476135, "learning_rate": 0.0001024390243902439, "epoch": 4.166666666666667, "step": 250},
+ {"loss": 0.0668, "grad_norm": 0.0908743143081665, "learning_rate": 9.800443458980046e-05, "epoch": 4.333333333333333, "step": 260},
+ {"loss": 0.0562, "grad_norm": 3.6227610111236572, "learning_rate": 9.356984478935698e-05, "epoch": 4.5, "step": 270},
+ {"loss": 0.139, "grad_norm": 5.90009069442749, "learning_rate": 8.913525498891354e-05, "epoch": 4.666666666666667, "step": 280},
+ {"loss": 0.1682, "grad_norm": 0.26101890206336975, "learning_rate": 8.470066518847007e-05, "epoch": 4.833333333333333, "step": 290},
+ {"loss": 0.0811, "grad_norm": 0.08818994462490082, "learning_rate": 8.026607538802661e-05, "epoch": 5.0, "step": 300},
+ {"eval_loss": 0.192447230219841, "eval_pr_auc": 0.9751270173678214, "eval_runtime": 0.4, "eval_samples_per_second": 509.942, "eval_steps_per_second": 64.993, "epoch": 5.0, "step": 300},
+ {"loss": 0.0439, "grad_norm": 1.816786289215088, "learning_rate": 7.583148558758315e-05, "epoch": 5.166666666666667, "step": 310},
+ {"loss": 0.0821, "grad_norm": 0.1448965072631836, "learning_rate": 7.139689578713969e-05, "epoch": 5.333333333333333, "step": 320},
+ {"loss": 0.0794, "grad_norm": 0.19551332294940948, "learning_rate": 6.696230598669624e-05, "epoch": 5.5, "step": 330},
+ {"loss": 0.0746, "grad_norm": 0.09540294110774994, "learning_rate": 6.252771618625277e-05, "epoch": 5.666666666666667, "step": 340},
+ {"loss": 0.1648, "grad_norm": 0.11747663468122482, "learning_rate": 5.809312638580932e-05, "epoch": 5.833333333333333, "step": 350},
+ {"loss": 0.0091, "grad_norm": 1.0677231550216675, "learning_rate": 5.365853658536586e-05, "epoch": 6.0, "step": 360},
+ {"eval_loss": 0.19510315358638763, "eval_pr_auc": 0.9770272260812469, "eval_runtime": 0.4071, "eval_samples_per_second": 501.045, "eval_steps_per_second": 63.859, "epoch": 6.0, "step": 360},
+ {"loss": 0.0846, "grad_norm": 4.168880939483643, "learning_rate": 4.92239467849224e-05, "epoch": 6.166666666666667, "step": 370},
+ {"loss": 0.0158, "grad_norm": 0.09182658046483994, "learning_rate": 4.478935698447894e-05, "epoch": 6.333333333333333, "step": 380},
+ {"loss": 0.0566, "grad_norm": 1.1361926794052124, "learning_rate": 4.035476718403548e-05, "epoch": 6.5, "step": 390},
+ {"loss": 0.1311, "grad_norm": 3.7645211219787598, "learning_rate": 3.5920177383592015e-05, "epoch": 6.666666666666667, "step": 400},
+ {"loss": 0.0141, "grad_norm": 0.49836593866348267, "learning_rate": 3.148558758314856e-05, "epoch": 6.833333333333333, "step": 410},
+ {"loss": 0.0381, "grad_norm": 2.4308130741119385, "learning_rate": 2.7050997782705102e-05, "epoch": 7.0, "step": 420},
+ {"eval_loss": 0.2070295661687851, "eval_pr_auc": 0.9792026176606673, "eval_runtime": 0.3966, "eval_samples_per_second": 514.427, "eval_steps_per_second": 65.564, "epoch": 7.0, "step": 420},
+ {"loss": 0.0194, "grad_norm": 0.03615836799144745, "learning_rate": 2.261640798226164e-05, "epoch": 7.166666666666667, "step": 430},
+ {"loss": 0.0575, "grad_norm": 3.66750168800354, "learning_rate": 1.8181818181818182e-05, "epoch": 7.333333333333333, "step": 440},
+ {"loss": 0.1944, "grad_norm": 4.023026466369629, "learning_rate": 1.3747228381374724e-05, "epoch": 7.5, "step": 450},
+ {"loss": 0.0305, "grad_norm": 0.09852059185504913, "learning_rate": 9.312638580931264e-06, "epoch": 7.666666666666667, "step": 460},
+ {"loss": 0.0189, "grad_norm": 0.1663094162940979, "learning_rate": 4.8780487804878055e-06, "epoch": 7.833333333333333, "step": 470},
+ {"loss": 0.0058, "grad_norm": 0.06563691794872284, "learning_rate": 4.434589800443459e-07, "epoch": 8.0, "step": 480},
+ {"eval_loss": 0.2041657716035843, "eval_pr_auc": 0.9786156897267255, "eval_runtime": 0.4007, "eval_samples_per_second": 509.098, "eval_steps_per_second": 64.885, "epoch": 8.0, "step": 480},
+ {"train_runtime": 61.9504, "train_samples_per_second": 123.066, "train_steps_per_second": 7.748, "total_flos": 383113929070272.0, "train_loss": 0.18523927949524174, "epoch": 8.0, "step": 480},
+ {"eval_loss": 0.2070295661687851, "eval_pr_auc": 0.9792026176606673, "eval_runtime": 0.4108, "eval_samples_per_second": 496.556, "eval_steps_per_second": 63.287, "epoch": 8.0, "step": 480}
+ ]
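
Reading the log: validation PR-AUC climbs from 0.922 after epoch 1 to a best of 0.9792 at epoch 7 (step 420), dips slightly at epoch 8, and the final entry repeats the epoch-7 numbers, consistent with the best checkpoint being reloaded at the end of training. A small sketch for pulling the best checkpoint out of this file:

```python
# Sketch: find the best eval_pr_auc checkpoint in train_history.json.
import json

with open("lora_v21_seed_45/train_history.json") as f:
    history = json.load(f)

evals = [r for r in history if "eval_pr_auc" in r]
best = max(evals, key=lambda r: r["eval_pr_auc"])  # ties resolve to the earliest record
print(best["epoch"], best["step"], round(best["eval_pr_auc"], 4))  # 7.0 420 0.9792
```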