avik1108 committed on
Commit c8ebbc9 · verified · 1 Parent(s): b3ed2f0

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ base_model: codellama/CodeLlama-34b-instruct-hf
3
+ library_name: peft
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.14.1.dev0
adapter_config.json ADDED
@@ -0,0 +1,33 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "codellama/CodeLlama-34b-instruct-hf",
5
+ "bias": "none",
6
+ "corda_config": null,
7
+ "eva_config": null,
8
+ "exclude_modules": null,
9
+ "fan_in_fan_out": false,
10
+ "inference_mode": true,
11
+ "init_lora_weights": true,
12
+ "layer_replication": null,
13
+ "layers_pattern": null,
14
+ "layers_to_transform": null,
15
+ "loftq_config": {},
16
+ "lora_alpha": 32,
17
+ "lora_bias": false,
18
+ "lora_dropout": 0.05,
19
+ "megatron_config": null,
20
+ "megatron_core": "megatron.core",
21
+ "modules_to_save": null,
22
+ "peft_type": "LORA",
23
+ "r": 16,
24
+ "rank_pattern": {},
25
+ "revision": null,
26
+ "target_modules": [
27
+ "q_proj",
28
+ "v_proj"
29
+ ],
30
+ "task_type": "CAUSAL_LM",
31
+ "use_dora": false,
32
+ "use_rslora": false
33
+ }
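
The adapter configuration above (LoRA with r=16, lora_alpha=32, dropout 0.05 on q_proj/v_proj of codellama/CodeLlama-34b-instruct-hf) is enough to load the adapter with the standard PEFT API. A minimal sketch, assuming the adapter files are available locally or under a repo id; the path below is a placeholder, not a confirmed location:

```python
# Sketch: attach this LoRA adapter (r=16, alpha=32, targets q_proj/v_proj) to its base model.
# ADAPTER_PATH is a placeholder for the actual repo id or a local folder containing
# adapter_config.json and adapter_model.safetensors.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "codellama/CodeLlama-34b-instruct-hf"   # from base_model_name_or_path
ADAPTER_PATH = "path/to/this/adapter"          # placeholder

tokenizer = AutoTokenizer.from_pretrained(ADAPTER_PATH)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto", torch_dtype="auto")
# If the tokenizer's added tokens exceed the base vocabulary, resize the embeddings first
# (see the note after added_tokens.json below).
model = PeftModel.from_pretrained(model, ADAPTER_PATH)
model.eval()
```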
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6df2baf97303611204a130ed5d7b31d7e3cdba8acf7c412897edaac2ef8abdce
3
+ size 2176149792
added_tokens.json ADDED
@@ -0,0 +1,7 @@
1
+ {
2
+ "[PAD]": 32004,
3
+ "▁<EOT>": 32003,
4
+ "▁<MID>": 32001,
5
+ "▁<PRE>": 32000,
6
+ "▁<SUF>": 32002
7
+ }
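
The ids above (32000–32004) extend the base CodeLlama-34B vocabulary, so the embedding matrix generally has to be resized before these tokens can be used; whether and how the original training script did this is not recorded in this upload. A hedged sketch:

```python
# Sketch: grow the embedding table to cover the tokens listed in added_tokens.json.
# This is an assumption about the intended setup, not a step taken from the training code.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/this/adapter")   # placeholder path
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-34b-instruct-hf")

if len(tokenizer) > model.get_input_embeddings().num_embeddings:
    # Newly added rows (e.g. [PAD] at id 32004) are freshly initialized; the adapter
    # itself stores no embedding weights (modules_to_save is null in adapter_config.json).
    model.resize_token_embeddings(len(tokenizer))
```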
checkpoint-900/README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ base_model: codellama/CodeLlama-7b-instruct-hf
3
+ library_name: peft
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.14.1.dev0
checkpoint-900/adapter_config.json ADDED
@@ -0,0 +1,33 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "codellama/CodeLlama-7b-instruct-hf",
5
+ "bias": "none",
6
+ "corda_config": null,
7
+ "eva_config": null,
8
+ "exclude_modules": null,
9
+ "fan_in_fan_out": false,
10
+ "inference_mode": true,
11
+ "init_lora_weights": true,
12
+ "layer_replication": null,
13
+ "layers_pattern": null,
14
+ "layers_to_transform": null,
15
+ "loftq_config": {},
16
+ "lora_alpha": 32,
17
+ "lora_bias": false,
18
+ "lora_dropout": 0.05,
19
+ "megatron_config": null,
20
+ "megatron_core": "megatron.core",
21
+ "modules_to_save": null,
22
+ "peft_type": "LORA",
23
+ "r": 16,
24
+ "rank_pattern": {},
25
+ "revision": null,
26
+ "target_modules": [
27
+ "v_proj",
28
+ "q_proj"
29
+ ],
30
+ "task_type": "CAUSAL_LM",
31
+ "use_dora": false,
32
+ "use_rslora": false
33
+ }
checkpoint-900/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a745f1ca945cb6e37ff54dd1bc71bb586f98180437da9fc2fd39370022ae8d97
3
+ size 1082705504
checkpoint-900/added_tokens.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "[PAD]": 32016
3
+ }
checkpoint-900/optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:70e7aa586d9e83b4a8d2746d5d44b99a93026fe7ad2ae083c8b21423f1f982e0
3
+ size 67185338
checkpoint-900/rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ea03b5738d0704e5b4a9141881fe737c7bca538cde377266ee0d0b58560d42f7
3
+ size 14244
checkpoint-900/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8241b28596309b6efdc6344cd50ddd93f72d5ff3654ee306ee63a9192d368a28
3
+ size 1064
checkpoint-900/special_tokens_map.json ADDED
@@ -0,0 +1,36 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "▁<PRE>",
4
+ "▁<MID>",
5
+ "▁<SUF>",
6
+ "▁<EOT>"
7
+ ],
8
+ "bos_token": {
9
+ "content": "<s>",
10
+ "lstrip": false,
11
+ "normalized": false,
12
+ "rstrip": false,
13
+ "single_word": false
14
+ },
15
+ "eos_token": {
16
+ "content": "</s>",
17
+ "lstrip": false,
18
+ "normalized": false,
19
+ "rstrip": false,
20
+ "single_word": false
21
+ },
22
+ "pad_token": {
23
+ "content": "[PAD]",
24
+ "lstrip": false,
25
+ "normalized": false,
26
+ "rstrip": false,
27
+ "single_word": false
28
+ },
29
+ "unk_token": {
30
+ "content": "<unk>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false
35
+ }
36
+ }
checkpoint-900/tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:45ccb9c8b6b561889acea59191d66986d314e7cbd6a78abc6e49b139ca91c1e6
3
+ size 500058
checkpoint-900/tokenizer_config.json ADDED
@@ -0,0 +1,95 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "added_tokens_decoder": {
5
+ "0": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "1": {
14
+ "content": "<s>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "2": {
22
+ "content": "</s>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "32007": {
30
+ "content": "▁<PRE>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "32008": {
38
+ "content": "▁<SUF>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "32009": {
46
+ "content": "▁<MID>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "32010": {
54
+ "content": "▁<EOT>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "32016": {
62
+ "content": "[PAD]",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ }
69
+ },
70
+ "additional_special_tokens": [
71
+ "▁<PRE>",
72
+ "▁<MID>",
73
+ "▁<SUF>",
74
+ "▁<EOT>"
75
+ ],
76
+ "bos_token": "<s>",
77
+ "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content | trim + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content | trim + ' ' + eos_token }}{% endif %}{% endfor %}",
78
+ "clean_up_tokenization_spaces": false,
79
+ "eos_token": "</s>",
80
+ "eot_token": "▁<EOT>",
81
+ "extra_special_tokens": {},
82
+ "fill_token": "<FILL_ME>",
83
+ "legacy": null,
84
+ "middle_token": "▁<MID>",
85
+ "model_max_length": 4096,
86
+ "pad_token": "[PAD]",
87
+ "padding_side": "right",
88
+ "prefix_token": "▁<PRE>",
89
+ "sp_model_kwargs": {},
90
+ "suffix_first": false,
91
+ "suffix_token": "▁<SUF>",
92
+ "tokenizer_class": "CodeLlamaTokenizer",
93
+ "unk_token": "<unk>",
94
+ "use_default_system_prompt": false
95
+ }
checkpoint-900/trainer_state.json ADDED
@@ -0,0 +1,663 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 9.89875173370319,
5
+ "eval_steps": 500,
6
+ "global_step": 900,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.11095700416088766,
13
+ "grad_norm": 0.31662145256996155,
14
+ "learning_rate": 7.407407407407407e-05,
15
+ "loss": 0.5604,
16
+ "step": 10
17
+ },
18
+ {
19
+ "epoch": 0.22191400832177532,
20
+ "grad_norm": 0.38665255904197693,
21
+ "learning_rate": 0.00014814814814814815,
22
+ "loss": 0.3448,
23
+ "step": 20
24
+ },
25
+ {
26
+ "epoch": 0.332871012482663,
27
+ "grad_norm": 0.38282278180122375,
28
+ "learning_rate": 0.00019999417253661235,
29
+ "loss": 0.1345,
30
+ "step": 30
31
+ },
32
+ {
33
+ "epoch": 0.44382801664355065,
34
+ "grad_norm": 0.33959391713142395,
35
+ "learning_rate": 0.000199890592080658,
36
+ "loss": 0.1206,
37
+ "step": 40
38
+ },
39
+ {
40
+ "epoch": 0.5547850208044383,
41
+ "grad_norm": 0.2943621873855591,
42
+ "learning_rate": 0.00019965766682369186,
43
+ "loss": 0.1234,
44
+ "step": 50
45
+ },
46
+ {
47
+ "epoch": 0.665742024965326,
48
+ "grad_norm": 0.25359126925468445,
49
+ "learning_rate": 0.00019929569837240564,
50
+ "loss": 0.1039,
51
+ "step": 60
52
+ },
53
+ {
54
+ "epoch": 0.7766990291262136,
55
+ "grad_norm": 0.23930878937244415,
56
+ "learning_rate": 0.0001988051554269675,
57
+ "loss": 0.102,
58
+ "step": 70
59
+ },
60
+ {
61
+ "epoch": 0.8876560332871013,
62
+ "grad_norm": 0.2013150006532669,
63
+ "learning_rate": 0.00019818667317411865,
64
+ "loss": 0.0974,
65
+ "step": 80
66
+ },
67
+ {
68
+ "epoch": 0.9986130374479889,
69
+ "grad_norm": 0.25096118450164795,
70
+ "learning_rate": 0.00019744105246469263,
71
+ "loss": 0.099,
72
+ "step": 90
73
+ },
74
+ {
75
+ "epoch": 1.0998613037447988,
76
+ "grad_norm": 0.25178226828575134,
77
+ "learning_rate": 0.0001965692587766216,
78
+ "loss": 0.0714,
79
+ "step": 100
80
+ },
81
+ {
82
+ "epoch": 1.2108183079056865,
83
+ "grad_norm": 0.2704208195209503,
84
+ "learning_rate": 0.00019557242096477327,
85
+ "loss": 0.0771,
86
+ "step": 110
87
+ },
88
+ {
89
+ "epoch": 1.3217753120665743,
90
+ "grad_norm": 0.22107760608196259,
91
+ "learning_rate": 0.00019445182979923654,
92
+ "loss": 0.0703,
93
+ "step": 120
94
+ },
95
+ {
96
+ "epoch": 1.4327323162274619,
97
+ "grad_norm": 0.26953792572021484,
98
+ "learning_rate": 0.00019320893629394873,
99
+ "loss": 0.0753,
100
+ "step": 130
101
+ },
102
+ {
103
+ "epoch": 1.5436893203883495,
104
+ "grad_norm": 0.2142401486635208,
105
+ "learning_rate": 0.00019184534982782904,
106
+ "loss": 0.0724,
107
+ "step": 140
108
+ },
109
+ {
110
+ "epoch": 1.6546463245492373,
111
+ "grad_norm": 0.25699618458747864,
112
+ "learning_rate": 0.00019036283606085053,
113
+ "loss": 0.0648,
114
+ "step": 150
115
+ },
116
+ {
117
+ "epoch": 1.765603328710125,
118
+ "grad_norm": 0.2224379926919937,
119
+ "learning_rate": 0.00018876331464774945,
120
+ "loss": 0.0706,
121
+ "step": 160
122
+ },
123
+ {
124
+ "epoch": 1.8765603328710125,
125
+ "grad_norm": 0.23435620963573456,
126
+ "learning_rate": 0.0001870488567523318,
127
+ "loss": 0.0695,
128
+ "step": 170
129
+ },
130
+ {
131
+ "epoch": 1.9875173370319001,
132
+ "grad_norm": 0.18676415085792542,
133
+ "learning_rate": 0.00018522168236559695,
134
+ "loss": 0.0615,
135
+ "step": 180
136
+ },
137
+ {
138
+ "epoch": 2.08876560332871,
139
+ "grad_norm": 0.24162153899669647,
140
+ "learning_rate": 0.00018328415743114912,
141
+ "loss": 0.0445,
142
+ "step": 190
143
+ },
144
+ {
145
+ "epoch": 2.1997226074895977,
146
+ "grad_norm": 0.3869277536869049,
147
+ "learning_rate": 0.00018123879078162097,
148
+ "loss": 0.0502,
149
+ "step": 200
150
+ },
151
+ {
152
+ "epoch": 2.3106796116504853,
153
+ "grad_norm": 0.3037394881248474,
154
+ "learning_rate": 0.00017908823089007457,
155
+ "loss": 0.0482,
156
+ "step": 210
157
+ },
158
+ {
159
+ "epoch": 2.421636615811373,
160
+ "grad_norm": 0.18976379930973053,
161
+ "learning_rate": 0.00017683526244058716,
162
+ "loss": 0.0528,
163
+ "step": 220
164
+ },
165
+ {
166
+ "epoch": 2.5325936199722605,
167
+ "grad_norm": 0.30705705285072327,
168
+ "learning_rate": 0.00017448280272246212,
169
+ "loss": 0.0521,
170
+ "step": 230
171
+ },
172
+ {
173
+ "epoch": 2.6435506241331486,
174
+ "grad_norm": 0.21610881388187408,
175
+ "learning_rate": 0.000172033897852734,
176
+ "loss": 0.0535,
177
+ "step": 240
178
+ },
179
+ {
180
+ "epoch": 2.754507628294036,
181
+ "grad_norm": 0.18693220615386963,
182
+ "learning_rate": 0.00016949171883185918,
183
+ "loss": 0.0517,
184
+ "step": 250
185
+ },
186
+ {
187
+ "epoch": 2.8654646324549238,
188
+ "grad_norm": 0.3321268558502197,
189
+ "learning_rate": 0.0001668595574376992,
190
+ "loss": 0.0407,
191
+ "step": 260
192
+ },
193
+ {
194
+ "epoch": 2.9764216366158114,
195
+ "grad_norm": 0.20721495151519775,
196
+ "learning_rate": 0.000164140821963114,
197
+ "loss": 0.0417,
198
+ "step": 270
199
+ },
200
+ {
201
+ "epoch": 3.0776699029126213,
202
+ "grad_norm": 0.20151656866073608,
203
+ "learning_rate": 0.00016133903280268362,
204
+ "loss": 0.0373,
205
+ "step": 280
206
+ },
207
+ {
208
+ "epoch": 3.188626907073509,
209
+ "grad_norm": 0.3590203821659088,
210
+ "learning_rate": 0.00015845781789427377,
211
+ "loss": 0.0358,
212
+ "step": 290
213
+ },
214
+ {
215
+ "epoch": 3.2995839112343965,
216
+ "grad_norm": 0.20630675554275513,
217
+ "learning_rate": 0.000155500908021347,
218
+ "loss": 0.0299,
219
+ "step": 300
220
+ },
221
+ {
222
+ "epoch": 3.410540915395284,
223
+ "grad_norm": 0.3287246525287628,
224
+ "learning_rate": 0.000152472131982103,
225
+ "loss": 0.0331,
226
+ "step": 310
227
+ },
228
+ {
229
+ "epoch": 3.5214979195561718,
230
+ "grad_norm": 0.24394913017749786,
231
+ "learning_rate": 0.0001493754116317029,
232
+ "loss": 0.0368,
233
+ "step": 320
234
+ },
235
+ {
236
+ "epoch": 3.63245492371706,
237
+ "grad_norm": 0.20165830850601196,
238
+ "learning_rate": 0.0001462147568039977,
239
+ "loss": 0.0336,
240
+ "step": 330
241
+ },
242
+ {
243
+ "epoch": 3.7434119278779474,
244
+ "grad_norm": 0.2538021504878998,
245
+ "learning_rate": 0.00014299426011933568,
246
+ "loss": 0.0295,
247
+ "step": 340
248
+ },
249
+ {
250
+ "epoch": 3.854368932038835,
251
+ "grad_norm": 0.36229604482650757,
252
+ "learning_rate": 0.00013971809168517298,
253
+ "loss": 0.0358,
254
+ "step": 350
255
+ },
256
+ {
257
+ "epoch": 3.9653259361997226,
258
+ "grad_norm": 0.4092184603214264,
259
+ "learning_rate": 0.00013639049369634876,
260
+ "loss": 0.034,
261
+ "step": 360
262
+ },
263
+ {
264
+ "epoch": 4.066574202496533,
265
+ "grad_norm": 0.11960680782794952,
266
+ "learning_rate": 0.00013301577494201664,
267
+ "loss": 0.0233,
268
+ "step": 370
269
+ },
270
+ {
271
+ "epoch": 4.17753120665742,
272
+ "grad_norm": 0.26415354013442993,
273
+ "learning_rate": 0.00012959830522634596,
274
+ "loss": 0.02,
275
+ "step": 380
276
+ },
277
+ {
278
+ "epoch": 4.288488210818308,
279
+ "grad_norm": 0.21966516971588135,
280
+ "learning_rate": 0.00012614250971021657,
281
+ "loss": 0.0225,
282
+ "step": 390
283
+ },
284
+ {
285
+ "epoch": 4.399445214979195,
286
+ "grad_norm": 0.2905697524547577,
287
+ "learning_rate": 0.00012265286318123415,
288
+ "loss": 0.0244,
289
+ "step": 400
290
+ },
291
+ {
292
+ "epoch": 4.510402219140083,
293
+ "grad_norm": 0.24163606762886047,
294
+ "learning_rate": 0.00011913388425948584,
295
+ "loss": 0.017,
296
+ "step": 410
297
+ },
298
+ {
299
+ "epoch": 4.621359223300971,
300
+ "grad_norm": 0.40009695291519165,
301
+ "learning_rate": 0.00011559012954653865,
302
+ "loss": 0.0219,
303
+ "step": 420
304
+ },
305
+ {
306
+ "epoch": 4.732316227461858,
307
+ "grad_norm": 0.1963382512331009,
308
+ "learning_rate": 0.0001120261877252568,
309
+ "loss": 0.0179,
310
+ "step": 430
311
+ },
312
+ {
313
+ "epoch": 4.843273231622746,
314
+ "grad_norm": 0.33989155292510986,
315
+ "learning_rate": 0.00010844667361807842,
316
+ "loss": 0.0198,
317
+ "step": 440
318
+ },
319
+ {
320
+ "epoch": 4.954230235783633,
321
+ "grad_norm": 0.38484710454940796,
322
+ "learning_rate": 0.00010485622221144484,
323
+ "loss": 0.0249,
324
+ "step": 450
325
+ },
326
+ {
327
+ "epoch": 5.055478502080444,
328
+ "grad_norm": 0.18945415318012238,
329
+ "learning_rate": 0.00010125948265412033,
330
+ "loss": 0.0177,
331
+ "step": 460
332
+ },
333
+ {
334
+ "epoch": 5.166435506241331,
335
+ "grad_norm": 0.25906893610954285,
336
+ "learning_rate": 9.766111223717352e-05,
337
+ "loss": 0.0127,
338
+ "step": 470
339
+ },
340
+ {
341
+ "epoch": 5.277392510402219,
342
+ "grad_norm": 0.23804187774658203,
343
+ "learning_rate": 9.406577036341548e-05,
344
+ "loss": 0.0128,
345
+ "step": 480
346
+ },
347
+ {
348
+ "epoch": 5.388349514563107,
349
+ "grad_norm": 0.20456787943840027,
350
+ "learning_rate": 9.047811251410376e-05,
351
+ "loss": 0.0111,
352
+ "step": 490
353
+ },
354
+ {
355
+ "epoch": 5.499306518723994,
356
+ "grad_norm": 0.15757159888744354,
357
+ "learning_rate": 8.690278422072384e-05,
358
+ "loss": 0.0101,
359
+ "step": 500
360
+ },
361
+ {
362
+ "epoch": 5.610263522884882,
363
+ "grad_norm": 0.16691505908966064,
364
+ "learning_rate": 8.334441504965455e-05,
365
+ "loss": 0.0115,
366
+ "step": 510
367
+ },
368
+ {
369
+ "epoch": 5.721220527045769,
370
+ "grad_norm": 0.5055399537086487,
371
+ "learning_rate": 7.980761260750607e-05,
372
+ "loss": 0.0088,
373
+ "step": 520
374
+ },
375
+ {
376
+ "epoch": 5.832177531206657,
377
+ "grad_norm": 0.15076065063476562,
378
+ "learning_rate": 7.629695657489257e-05,
379
+ "loss": 0.0117,
380
+ "step": 530
381
+ },
382
+ {
383
+ "epoch": 5.943134535367545,
384
+ "grad_norm": 0.09655993431806564,
385
+ "learning_rate": 7.281699277636572e-05,
386
+ "loss": 0.0111,
387
+ "step": 540
388
+ },
389
+ {
390
+ "epoch": 6.044382801664355,
391
+ "grad_norm": 0.4866645336151123,
392
+ "learning_rate": 6.93722272941869e-05,
393
+ "loss": 0.0092,
394
+ "step": 550
395
+ },
396
+ {
397
+ "epoch": 6.155339805825243,
398
+ "grad_norm": 0.1816895604133606,
399
+ "learning_rate": 6.59671206335602e-05,
400
+ "loss": 0.0082,
401
+ "step": 560
402
+ },
403
+ {
404
+ "epoch": 6.26629680998613,
405
+ "grad_norm": 0.22271257638931274,
406
+ "learning_rate": 6.260608194688206e-05,
407
+ "loss": 0.0046,
408
+ "step": 570
409
+ },
410
+ {
411
+ "epoch": 6.377253814147018,
412
+ "grad_norm": 0.06787201762199402,
413
+ "learning_rate": 5.929346332448511e-05,
414
+ "loss": 0.0051,
415
+ "step": 580
416
+ },
417
+ {
418
+ "epoch": 6.4882108183079055,
419
+ "grad_norm": 0.09298055619001389,
420
+ "learning_rate": 5.6033554159270294e-05,
421
+ "loss": 0.0054,
422
+ "step": 590
423
+ },
424
+ {
425
+ "epoch": 6.599167822468793,
426
+ "grad_norm": 0.03731105476617813,
427
+ "learning_rate": 5.283057559252341e-05,
428
+ "loss": 0.0053,
429
+ "step": 600
430
+ },
431
+ {
432
+ "epoch": 6.710124826629681,
433
+ "grad_norm": 0.10652171820402145,
434
+ "learning_rate": 4.96886750481082e-05,
435
+ "loss": 0.0057,
436
+ "step": 610
437
+ },
438
+ {
439
+ "epoch": 6.821081830790568,
440
+ "grad_norm": 0.2607424259185791,
441
+ "learning_rate": 4.661192086211366e-05,
442
+ "loss": 0.0077,
443
+ "step": 620
444
+ },
445
+ {
446
+ "epoch": 6.932038834951456,
447
+ "grad_norm": 0.11328639835119247,
448
+ "learning_rate": 4.360429701490934e-05,
449
+ "loss": 0.0073,
450
+ "step": 630
451
+ },
452
+ {
453
+ "epoch": 7.033287101248266,
454
+ "grad_norm": 0.0941685363650322,
455
+ "learning_rate": 4.06696979724298e-05,
456
+ "loss": 0.0039,
457
+ "step": 640
458
+ },
459
+ {
460
+ "epoch": 7.144244105409154,
461
+ "grad_norm": 0.45776239037513733,
462
+ "learning_rate": 3.7811923643367974e-05,
463
+ "loss": 0.0032,
464
+ "step": 650
465
+ },
466
+ {
467
+ "epoch": 7.2552011095700415,
468
+ "grad_norm": 0.08863729238510132,
469
+ "learning_rate": 3.503467445880789e-05,
470
+ "loss": 0.0026,
471
+ "step": 660
472
+ },
473
+ {
474
+ "epoch": 7.366158113730929,
475
+ "grad_norm": 0.04661976918578148,
476
+ "learning_rate": 3.2341546580666796e-05,
477
+ "loss": 0.0024,
478
+ "step": 670
479
+ },
480
+ {
481
+ "epoch": 7.477115117891817,
482
+ "grad_norm": 0.08003357797861099,
483
+ "learning_rate": 2.9736027245152275e-05,
484
+ "loss": 0.0022,
485
+ "step": 680
486
+ },
487
+ {
488
+ "epoch": 7.588072122052704,
489
+ "grad_norm": 0.15967042744159698,
490
+ "learning_rate": 2.722149024726307e-05,
491
+ "loss": 0.0024,
492
+ "step": 690
493
+ },
494
+ {
495
+ "epoch": 7.699029126213592,
496
+ "grad_norm": 0.0572751984000206,
497
+ "learning_rate": 2.480119157218108e-05,
498
+ "loss": 0.003,
499
+ "step": 700
500
+ },
501
+ {
502
+ "epoch": 7.8099861303744795,
503
+ "grad_norm": 0.0780700072646141,
504
+ "learning_rate": 2.247826517921121e-05,
505
+ "loss": 0.0035,
506
+ "step": 710
507
+ },
508
+ {
509
+ "epoch": 7.920943134535367,
510
+ "grad_norm": 0.19474399089813232,
511
+ "learning_rate": 2.025571894372794e-05,
512
+ "loss": 0.0027,
513
+ "step": 720
514
+ },
515
+ {
516
+ "epoch": 8.022191400832178,
517
+ "grad_norm": 0.12848657369613647,
518
+ "learning_rate": 1.813643076238375e-05,
519
+ "loss": 0.002,
520
+ "step": 730
521
+ },
522
+ {
523
+ "epoch": 8.133148404993065,
524
+ "grad_norm": 0.05772533640265465,
525
+ "learning_rate": 1.6123144826622504e-05,
526
+ "loss": 0.0017,
527
+ "step": 740
528
+ },
529
+ {
530
+ "epoch": 8.244105409153953,
531
+ "grad_norm": 0.14121367037296295,
532
+ "learning_rate": 1.4218468069322578e-05,
533
+ "loss": 0.0013,
534
+ "step": 750
535
+ },
536
+ {
537
+ "epoch": 8.35506241331484,
538
+ "grad_norm": 0.14342299103736877,
539
+ "learning_rate": 1.2424866789171729e-05,
540
+ "loss": 0.0016,
541
+ "step": 760
542
+ },
543
+ {
544
+ "epoch": 8.466019417475728,
545
+ "grad_norm": 0.03438349440693855,
546
+ "learning_rate": 1.0744663457143878e-05,
547
+ "loss": 0.0011,
548
+ "step": 770
549
+ },
550
+ {
551
+ "epoch": 8.576976421636616,
552
+ "grad_norm": 0.0756613090634346,
553
+ "learning_rate": 9.180033709213454e-06,
554
+ "loss": 0.0017,
555
+ "step": 780
556
+ },
557
+ {
558
+ "epoch": 8.687933425797503,
559
+ "grad_norm": 0.0464102178812027,
560
+ "learning_rate": 7.733003529201278e-06,
561
+ "loss": 0.0014,
562
+ "step": 790
563
+ },
564
+ {
565
+ "epoch": 8.79889042995839,
566
+ "grad_norm": 0.12452979385852814,
567
+ "learning_rate": 6.405446625399481e-06,
568
+ "loss": 0.0015,
569
+ "step": 800
570
+ },
571
+ {
572
+ "epoch": 8.909847434119278,
573
+ "grad_norm": 0.08071909099817276,
574
+ "learning_rate": 5.199082004372957e-06,
575
+ "loss": 0.0014,
576
+ "step": 810
577
+ },
578
+ {
579
+ "epoch": 9.011095700416089,
580
+ "grad_norm": 0.06948132812976837,
581
+ "learning_rate": 4.115471745078314e-06,
582
+ "loss": 0.0012,
583
+ "step": 820
584
+ },
585
+ {
586
+ "epoch": 9.122052704576976,
587
+ "grad_norm": 0.07605510950088501,
588
+ "learning_rate": 3.1560189761830728e-06,
589
+ "loss": 0.0009,
590
+ "step": 830
591
+ },
592
+ {
593
+ "epoch": 9.233009708737864,
594
+ "grad_norm": 0.0312280785292387,
595
+ "learning_rate": 2.3219660592038285e-06,
596
+ "loss": 0.0012,
597
+ "step": 840
598
+ },
599
+ {
600
+ "epoch": 9.343966712898752,
601
+ "grad_norm": 0.02329327166080475,
602
+ "learning_rate": 1.6143929798162704e-06,
603
+ "loss": 0.001,
604
+ "step": 850
605
+ },
606
+ {
607
+ "epoch": 9.45492371705964,
608
+ "grad_norm": 0.08054498583078384,
609
+ "learning_rate": 1.034215949419748e-06,
610
+ "loss": 0.0012,
611
+ "step": 860
612
+ },
613
+ {
614
+ "epoch": 9.565880721220527,
615
+ "grad_norm": 0.09850303828716278,
616
+ "learning_rate": 5.821862187675775e-07,
617
+ "loss": 0.0011,
618
+ "step": 870
619
+ },
620
+ {
621
+ "epoch": 9.676837725381414,
622
+ "grad_norm": 0.08373916149139404,
623
+ "learning_rate": 2.588891051988895e-07,
624
+ "loss": 0.0019,
625
+ "step": 880
626
+ },
627
+ {
628
+ "epoch": 9.787794729542302,
629
+ "grad_norm": 0.017217393964529037,
630
+ "learning_rate": 6.474323473194543e-08,
631
+ "loss": 0.0009,
632
+ "step": 890
633
+ },
634
+ {
635
+ "epoch": 9.89875173370319,
636
+ "grad_norm": 0.04848321154713631,
637
+ "learning_rate": 0.0,
638
+ "loss": 0.0009,
639
+ "step": 900
640
+ }
641
+ ],
642
+ "logging_steps": 10,
643
+ "max_steps": 900,
644
+ "num_input_tokens_seen": 0,
645
+ "num_train_epochs": 10,
646
+ "save_steps": 500,
647
+ "stateful_callbacks": {
648
+ "TrainerControl": {
649
+ "args": {
650
+ "should_epoch_stop": false,
651
+ "should_evaluate": false,
652
+ "should_log": false,
653
+ "should_save": true,
654
+ "should_training_stop": true
655
+ },
656
+ "attributes": {}
657
+ }
658
+ },
659
+ "total_flos": 9.301284175906406e+16,
660
+ "train_batch_size": 1,
661
+ "trial_name": null,
662
+ "trial_params": null
663
+ }
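
Since trainer_state.json is plain JSON, the loss and learning-rate trajectory logged above can be read back directly, for example to plot the training curve. A small sketch, assuming a local copy of the checkpoint folder:

```python
# Sketch: read the training curve recorded in checkpoint-900/trainer_state.json.
import json

with open("checkpoint-900/trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    # Each logged step (every 10 steps) records epoch, loss, learning_rate and grad_norm.
    print(entry["step"], round(entry["epoch"], 2), entry["loss"], entry["learning_rate"])
```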
checkpoint-900/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cdeac0fb37eaa91455cecc123679fa4e4438fdb27ac5176fc55a9a6450b9ae55
3
+ size 5432
runs/Dec28_05-44-47_0039b5b9221b/events.out.tfevents.1735364795.0039b5b9221b.1852.0 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:22b1bb3a78c68a8e8ee588f10de3ffb7445f71d0a64f192568f0b40d096e5560
3
+ size 5276
runs/Dec28_08-24-06_1354036abf15/events.out.tfevents.1735374356.1354036abf15.1636.0 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a5330ae7517d78fbc16cf76d7f651a743ebb00fa7cc5d1ac1c0dc992049590c2
3
+ size 11722
runs/Dec28_12-48-07_fd0e334a1436/events.out.tfevents.1735390245.fd0e334a1436.519.0 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:60d130b94395e4d6dbd18f9f3a94c15006a2678f3a2421081e9066b38158e2c7
3
+ size 25027
runs/Jan22_16-15-05_cdc5c1d0bc0b/events.out.tfevents.1737565545.cdc5c1d0bc0b.885.0 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:54ea3402e2577e75c628729a718c96f1490e2170908b8137b08a5a6462c760a0
3
+ size 7594
runs/Jan22_18-00-05_cdc5c1d0bc0b/events.out.tfevents.1737569278.cdc5c1d0bc0b.33121.0 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c64d37d46dcfd061e4f62bda9773ba34970443dc59c86a169422928a9ce794a1
3
+ size 12646
runs/Jan23_04-47-05_1ad434f892e7/events.out.tfevents.1737608653.1ad434f892e7.1069.0 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5011947f1e52d172f9f21ea19051b72e20978323ee548741e4863e7906054873
3
+ size 11733
special_tokens_map.json ADDED
@@ -0,0 +1,36 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "▁<PRE>",
4
+ "▁<MID>",
5
+ "▁<SUF>",
6
+ "▁<EOT>"
7
+ ],
8
+ "bos_token": {
9
+ "content": "<s>",
10
+ "lstrip": false,
11
+ "normalized": false,
12
+ "rstrip": false,
13
+ "single_word": false
14
+ },
15
+ "eos_token": {
16
+ "content": "</s>",
17
+ "lstrip": false,
18
+ "normalized": false,
19
+ "rstrip": false,
20
+ "single_word": false
21
+ },
22
+ "pad_token": {
23
+ "content": "[PAD]",
24
+ "lstrip": false,
25
+ "normalized": false,
26
+ "rstrip": false,
27
+ "single_word": false
28
+ },
29
+ "unk_token": {
30
+ "content": "<unk>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false
35
+ }
36
+ }
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
3
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,95 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "added_tokens_decoder": {
5
+ "0": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "1": {
14
+ "content": "<s>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "2": {
22
+ "content": "</s>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "32000": {
30
+ "content": "▁<PRE>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "32001": {
38
+ "content": "▁<MID>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "32002": {
46
+ "content": "▁<SUF>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "32003": {
54
+ "content": "▁<EOT>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "32004": {
62
+ "content": "[PAD]",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ }
69
+ },
70
+ "additional_special_tokens": [
71
+ "▁<PRE>",
72
+ "▁<MID>",
73
+ "▁<SUF>",
74
+ "▁<EOT>"
75
+ ],
76
+ "bos_token": "<s>",
77
+ "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content | trim + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content | trim + ' ' + eos_token }}{% endif %}{% endfor %}",
78
+ "clean_up_tokenization_spaces": false,
79
+ "eos_token": "</s>",
80
+ "eot_token": "▁<EOT>",
81
+ "extra_special_tokens": {},
82
+ "fill_token": "<FILL_ME>",
83
+ "legacy": null,
84
+ "middle_token": "▁<MID>",
85
+ "model_max_length": 4096,
86
+ "pad_token": "[PAD]",
87
+ "padding_side": "right",
88
+ "prefix_token": "▁<PRE>",
89
+ "sp_model_kwargs": {},
90
+ "suffix_first": false,
91
+ "suffix_token": "▁<SUF>",
92
+ "tokenizer_class": "CodeLlamaTokenizer",
93
+ "unk_token": "<unk>",
94
+ "use_default_system_prompt": false
95
+ }
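
The chat_template field above is the standard Llama-2-style instruct template ([INST] ... [/INST] with an optional <<SYS>> block), so prompts can be assembled with apply_chat_template instead of hand-formatting. A brief sketch, assuming the tokenizer is loaded from a local copy of these files:

```python
# Sketch: build a prompt using the chat template shipped in tokenizer_config.json.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/this/repo")   # placeholder path

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# The template wraps the system message in <<SYS>> tags and the user turn in [INST] ... [/INST].
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```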
training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f98408be396298e0c3f260988a6947831bd1dce848adb105edfc40caea6c77a
3
+ size 5432