ranamhamoud committed
Commit ab95f7a · verified · 1 Parent(s): ae3729c

Upload 11 files
README.md ADDED
---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:mistralai/Mistral-7B-v0.1
- lora
- sft
- transformers
- trl
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
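Until the card's own snippet is filled in, here is a minimal, hedged sketch of loading this adapter with PEFT. The `adapter_path` argument is a placeholder for a local checkout of this repository, and the dtype/device settings are assumptions, not values stated in the card:

```python
# Hedged sketch: load the Mistral-7B-v0.1 base model and apply this LoRA adapter.
# adapter_path is a placeholder for a local checkout of this repository.

def load_adapter(adapter_path: str = "."):
    # Imports deferred so the sketch can be read without transformers/peft installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(adapter_path)
    base = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
    )
    # This repo's tokenizer adds <|think|>/<|story|> tokens (ids 32000-32003),
    # so the embedding table must be resized before the adapter is applied.
    base.resize_token_embeddings(len(tokenizer))
    return tokenizer, PeftModel.from_pretrained(base, adapter_path)
```

Generation then follows the usual path, e.g. `tok, model = load_adapter()` followed by `model.generate(**tok("<|story|>", return_tensors="pt").to(model.device), max_new_tokens=64)`.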
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.18.1
adapter_config.json ADDED
{
  "alora_invocation_tokens": null,
  "alpha_pattern": {},
  "arrow_config": null,
  "auto_mapping": null,
  "base_model_name_or_path": "mistralai/Mistral-7B-v0.1",
  "bias": "none",
  "corda_config": null,
  "ensure_weight_tying": false,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 16,
  "lora_bias": false,
  "lora_dropout": 0.1,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "peft_version": "0.18.1",
  "qalora_group_size": 16,
  "r": 32,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
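A couple of numbers fall straight out of this config. With `lora_alpha` 16 over rank 32, the LoRA scaling is 0.5, and, assuming Mistral-7B-v0.1's published shapes (32 layers, hidden size 4096, 8 KV heads of head dim 128, which are not stated in this repo), targeting `q_proj` and `v_proj` adds roughly 13.6M trainable parameters:

```python
# Numbers implied by adapter_config.json. Mistral-7B-v0.1 shapes are assumed
# from the base model's published config, not from this repo.
r, alpha = 32, 16            # "r" and "lora_alpha"
hidden, n_layers = 4096, 32  # hidden size and layer count (assumed)
kv_out = 8 * 128             # 8 KV heads x head_dim 128 -> v_proj output dim

scaling = alpha / r                    # factor applied to the BA update
q_params = r * (hidden + hidden)       # q_proj: A is r x 4096, B is 4096 x r
v_params = r * (hidden + kv_out)       # v_proj: A is r x 4096, B is 1024 x r
total = n_layers * (q_params + v_params)
print(scaling, total)  # 0.5 13631488  (~13.6M trainable parameters)
```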
adapter_model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:45a1406b146fbb26c8e7e69a970142356ff2bbf979551ab848ccb2bdb6d6ab5a
size 551634040
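`adapter_model.safetensors` (like the other binary files below) is stored as a Git LFS pointer of the shape shown above: three `key value` lines, which are trivially parseable:

```python
# Parse a git-lfs pointer file (this one is adapter_model.safetensors' pointer).
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:45a1406b146fbb26c8e7e69a970142356ff2bbf979551ab848ccb2bdb6d6ab5a
size 551634040"""

meta = dict(line.split(" ", 1) for line in pointer.splitlines())
algo, digest = meta["oid"].split(":", 1)
print(algo, int(meta["size"]))  # sha256 551634040
```

The `size` field is the byte count of the real object, which Git LFS fetches on checkout.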
optimizer.pt ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7a16342584d17efac5145c50d9165bab082754d70c4cd11f889d638874f06af9
size 109130618
rng_state.pth ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d09ce2b6e37a9d9c8c7d4e22f7a724d231f482b130612876f5faa0f15ebb3de9
size 14244
scheduler.pt ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4a9b3bb4e458b1d27f96bde8ed8a0ace6ffbcd98ee8a8b1000c43f4118d5a30e
size 1064
special_tokens_map.json ADDED
{
  "additional_special_tokens": [
    {
      "content": "<|think|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false
    },
    {
      "content": "<|/think|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false
    },
    {
      "content": "<|story|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false
    },
    {
      "content": "<|/story|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false
    }
  ],
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "</s>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
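The `<|think|>`/`<|story|>` pairs behave as open/close delimiters, so decoded model output can be split by span type. A small illustrative helper (the function name and sample text are mine, not from the repo):

```python
import re

# Illustrative helper: pull spans delimited by the adapter's custom tokens
# out of decoded output. Works for both the "think" and "story" pairs.
def extract(tagged: str, tag: str) -> list:
    return re.findall(rf"<\|{tag}\|>(.*?)<\|/{tag}\|>", tagged, flags=re.DOTALL)

sample = "<|think|>plan the plot<|/think|><|story|>Once upon a time...<|/story|>"
print(extract(sample, "story"))  # ['Once upon a time...']
print(extract(sample, "think"))  # ['plan the plot']
```

Note this only works if generation is decoded with `skip_special_tokens=False`, since the delimiters are special tokens.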
tokenizer.json ADDED
The diff for this file is too large to render.
tokenizer_config.json ADDED
{
  "add_bos_token": true,
  "add_eos_token": false,
  "add_prefix_space": null,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32000": {
      "content": "<|think|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32001": {
      "content": "<|/think|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32002": {
      "content": "<|story|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32003": {
      "content": "<|/story|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<|think|>",
    "<|/think|>",
    "<|story|>",
    "<|/story|>"
  ],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "extra_special_tokens": {},
  "legacy": false,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "</s>",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
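Because `added_tokens_decoder` assigns ids past the base vocabulary, anything loading this adapter must grow the base model's embedding matrix first. A quick check of the required size (the base vocab of 32000 is Mistral-7B-v0.1's published value, assumed here):

```python
# The four extra tokens occupy ids 32000-32003; Mistral-7B-v0.1's base
# vocabulary size of 32000 is an assumption, not stated in this repo.
added_ids = {32000: "<|think|>", 32001: "<|/think|>", 32002: "<|story|>", 32003: "<|/story|>"}
base_vocab = 32000
needed_vocab = max(base_vocab, max(added_ids) + 1)
print(needed_vocab)  # 32004
```

In transformers this is `model.resize_token_embeddings(len(tokenizer))`, done before `PeftModel.from_pretrained`.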
trainer_state.json ADDED
{
  "best_global_step": null,
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 4.992414100847836,
  "eval_steps": 500,
  "global_step": 2800,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "logging_steps": 50,
  "max_steps": 3360,
  "num_input_tokens_seen": 0,
  "num_train_epochs": 6,
  "save_steps": 560,
  "stateful_callbacks": {
    "TrainerControl": {
      "args": {
        "should_epoch_stop": false,
        "should_evaluate": false,
        "should_log": false,
        "should_save": true,
        "should_training_stop": false
      },
      "attributes": {}
    }
  },
  "total_flos": 4.897183183625257e+17,
  "train_batch_size": 1,
  "trial_name": null,
  "trial_params": null
}

log_history (one entry every 50 steps; epoch, accuracy, entropy, and learning rate are rounded here, full precision is in the raw file):

| step | epoch | loss | mean_token_accuracy | entropy | grad_norm | learning_rate | num_tokens |
|-----:|------:|-----:|--------------------:|--------:|----------:|--------------:|-----------:|
| 50 | 0.089 | 1.5974 | 0.6320 | 1.6260 | 0.9921875 | 9.703e-05 | 204800 |
| 100 | 0.178 | 1.095 | 0.7227 | 1.1297 | 0.7734375 | 1.960e-04 | 409600 |
| 150 | 0.268 | 1.0001 | 0.7389 | 1.0354 | 0.640625 | 1.999e-04 | 614400 |
| 200 | 0.357 | 0.9576 | 0.7487 | 0.9884 | 0.609375 | 1.996e-04 | 819200 |
| 250 | 0.446 | 0.9485 | 0.7503 | 0.9802 | 0.54296875 | 1.990e-04 | 1024000 |
| 300 | 0.535 | 0.942 | 0.7504 | 0.9769 | 0.58984375 | 1.982e-04 | 1228800 |
| 350 | 0.625 | 0.9121 | 0.7569 | 0.9462 | 0.53125 | 1.972e-04 | 1433600 |
| 400 | 0.714 | 0.9115 | 0.7562 | 0.9457 | 0.57421875 | 1.959e-04 | 1638400 |
| 450 | 0.803 | 0.9109 | 0.7563 | 0.9444 | 0.5625 | 1.944e-04 | 1843200 |
| 500 | 0.892 | 0.8951 | 0.7601 | 0.9286 | 0.60546875 | 1.927e-04 | 2048000 |
| 550 | 0.982 | 0.8924 | 0.7599 | 0.9270 | 0.55859375 | 1.908e-04 | 2252800 |
| 600 | 1.070 | 0.8402 | 0.7711 | 0.8719 | 0.5390625 | 1.887e-04 | 2454528 |
| 650 | 1.159 | 0.8309 | 0.7732 | 0.8661 | 0.640625 | 1.864e-04 | 2659328 |
| 700 | 1.248 | 0.8363 | 0.7720 | 0.8681 | 0.6015625 | 1.838e-04 | 2864128 |
| 750 | 1.337 | 0.8353 | 0.7727 | 0.8708 | 0.578125 | 1.811e-04 | 3068928 |
| 800 | 1.427 | 0.8265 | 0.7740 | 0.8598 | 0.5703125 | 1.782e-04 | 3273728 |
| 850 | 1.516 | 0.8231 | 0.7763 | 0.8543 | 0.59375 | 1.751e-04 | 3478528 |
| 900 | 1.605 | 0.8254 | 0.7742 | 0.8620 | 0.61328125 | 1.718e-04 | 3683328 |
| 950 | 1.694 | 0.8275 | 0.7748 | 0.8650 | 0.58984375 | 1.684e-04 | 3888128 |
| 1000 | 1.784 | 0.8255 | 0.7733 | 0.8605 | 0.57421875 | 1.648e-04 | 4092928 |
| 1050 | 1.873 | 0.8172 | 0.7759 | 0.8492 | 0.57421875 | 1.611e-04 | 4297728 |
| 1100 | 1.962 | 0.8165 | 0.7771 | 0.8504 | 0.58203125 | 1.572e-04 | 4502528 |
| 1150 | 2.050 | 0.7612 | 0.7890 | 0.7980 | 0.625 | 1.532e-04 | 4704256 |
| 1200 | 2.139 | 0.7431 | 0.7927 | 0.7789 | 0.62890625 | 1.490e-04 | 4909056 |
| 1250 | 2.228 | 0.7431 | 0.7919 | 0.7778 | 0.71484375 | 1.448e-04 | 5113856 |
| 1300 | 2.318 | 0.7421 | 0.7937 | 0.7770 | 0.671875 | 1.404e-04 | 5318656 |
| 1350 | 2.407 | 0.7405 | 0.7926 | 0.7744 | 0.69921875 | 1.360e-04 | 5523456 |
| 1400 | 2.496 | 0.7402 | 0.7932 | 0.7772 | 0.61328125 | 1.314e-04 | 5728256 |
| 1450 | 2.585 | 0.7369 | 0.7952 | 0.7689 | 0.66796875 | 1.268e-04 | 5933056 |
| 1500 | 2.675 | 0.7474 | 0.7915 | 0.7825 | 0.70703125 | 1.221e-04 | 6137856 |
| 1550 | 2.764 | 0.7383 | 0.7942 | 0.7764 | 0.625 | 1.174e-04 | 6342656 |
| 1600 | 2.853 | 0.7394 | 0.7945 | 0.7750 | 0.6796875 | 1.126e-04 | 6547456 |
| 1650 | 2.942 | 0.7352 | 0.7931 | 0.7712 | 0.65234375 | 1.078e-04 | 6752256 |
| 1700 | 3.030 | 0.7141 | 0.7993 | 0.7491 | 0.71875 | 1.030e-04 | 6953984 |
| 1750 | 3.120 | 0.6733 | 0.8095 | 0.7143 | 0.66796875 | 9.822e-05 | 7158784 |
| 1800 | 3.209 | 0.6705 | 0.8092 | 0.7075 | 0.734375 | 9.340e-05 | 7363584 |
| 1850 | 3.298 | 0.683 | 0.8069 | 0.7165 | 0.734375 | 8.860e-05 | 7568384 |
| 1900 | 3.387 | 0.6751 | 0.8079 | 0.7122 | 0.75390625 | 8.383e-05 | 7773184 |
| 1950 | 3.477 | 0.6673 | 0.8113 | 0.7035 | 0.6953125 | 7.909e-05 | 7977984 |
| 2000 | 3.566 | 0.6688 | 0.8106 | 0.7058 | 0.76171875 | 7.441e-05 | 8182784 |
| 2050 | 3.655 | 0.6761 | 0.8089 | 0.7126 | 0.76953125 | 6.978e-05 | 8387584 |
| 2100 | 3.744 | 0.6634 | 0.8111 | 0.7008 | 0.765625 | 6.522e-05 | 8592384 |
| 2150 | 3.834 | 0.6631 | 0.8110 | 0.7004 | 0.73828125 | 6.074e-05 | 8797184 |
| 2200 | 3.923 | 0.6665 | 0.8111 | 0.7015 | 0.77734375 | 5.636e-05 | 9001984 |
| 2250 | 4.011 | 0.6758 | 0.8095 | 0.7123 | 0.703125 | 5.207e-05 | 9203712 |
| 2300 | 4.100 | 0.6269 | 0.8208 | 0.6674 | 0.74609375 | 4.790e-05 | 9408512 |
| 2350 | 4.189 | 0.6365 | 0.8189 | 0.6730 | 0.75390625 | 4.385e-05 | 9613312 |
| 2400 | 4.278 | 0.6271 | 0.8203 | 0.6663 | 0.8515625 | 3.993e-05 | 9818112 |
| 2450 | 4.368 | 0.6261 | 0.8207 | 0.6649 | 0.80078125 | 3.614e-05 | 10022912 |
| 2500 | 4.457 | 0.6238 | 0.8212 | 0.6622 | 0.77734375 | 3.251e-05 | 10227712 |
| 2550 | 4.546 | 0.6275 | 0.8213 | 0.6667 | 0.77734375 | 2.903e-05 | 10432512 |
| 2600 | 4.635 | 0.6295 | 0.8205 | 0.6677 | 0.78125 | 2.572e-05 | 10637312 |
| 2650 | 4.725 | 0.6347 | 0.8190 | 0.6685 | 0.796875 | 2.258e-05 | 10842112 |
| 2700 | 4.814 | 0.6353 | 0.8182 | 0.6724 | 0.8125 | 1.962e-05 | 11046912 |
| 2750 | 4.903 | 0.6291 | 0.8204 | 0.6707 | 0.75390625 | 1.685e-05 | 11251712 |
| 2800 | 4.992 | 0.6283 | 0.8204 | 0.6610 | 0.7734375 | 1.427e-05 | 11456512 |
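The log can also be reduced programmatically. A hedged sketch that inlines just the first and last `log_history` entries to show the shape of the computation (the real file logs every 50 steps up to step 2800):

```python
import json

# Hedged sketch: summarising trainer_state.json's log_history. Only the first
# and last entries are inlined here; in practice you would json.load the file.
state = json.loads("""
{"log_history": [
  {"epoch": 0.0892, "loss": 1.5974, "step": 50},
  {"epoch": 4.9924, "loss": 0.6283, "step": 2800}
]}
""")

first, last = state["log_history"][0], state["log_history"][-1]
drop = first["loss"] - last["loss"]
print(f"loss {first['loss']} -> {last['loss']} "
      f"({drop:.4f} drop over {last['step'] - first['step']} steps)")
# loss 1.5974 -> 0.6283 (0.9691 drop over 2750 steps)
```

With `max_steps` 3360 and `global_step` 2800, this checkpoint sits at roughly epoch 5 of the 6 configured.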
training_args.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:5aef2ef2aa5e0492e3c844c5919533b1d735a9a2341f5ae411c844a57e691a3b
size 5816