Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

granite-3b-code-instruct-2k - EXL2
- Model creator: https://huggingface.co/ibm-granite/
- Original model: https://huggingface.co/ibm-granite/granite-3b-code-instruct-2k/

## Available sizes

| Branch | Bits | Description |
| ------ | ---- | ----------- |
| [8_0](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-2k-exl2/tree/8_0) | 8.0 | Maximum quality that ExLlamaV2 can produce, near-unquantized performance. |
| [6_5](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-2k-exl2/tree/6_5) | 6.5 | Very similar to 8.0, good tradeoff of size vs. performance, **recommended**. |
| [5_0](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-2k-exl2/tree/5_0) | 5.0 | Slightly lower quality vs. 6.5, but usable. |
| [4_25](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-2k-exl2/tree/4_25) | 4.25 | GPTQ-equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-2k-exl2/tree/3_5) | 3.5 | Lower quality, only use if you have to. |

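Bits per weight translates almost directly into download size: the quantized weights take roughly `parameters × bpw / 8` bytes, plus a little overhead for metadata. A quick back-of-the-envelope sketch (the 3.5B parameter count here is an assumption based on the model's name, not a measured figure):

```python
# rough EXL2 weight-size estimate: params * bits-per-weight / 8 bytes
params = 3.5e9  # assumed parameter count for a "3B" model
for bpw in (8.0, 6.5, 5.0, 4.25, 3.5):
    gib = params * bpw / 8 / 2**30
    print(f"{bpw:5.2f} bpw -> ~{gib:.1f} GiB")
```
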
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-2k-exl2 granite-3b-code-instruct-2k-6_5
```
With huggingface-hub, first install the CLI:
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:

Linux:
```shell
huggingface-cli download RichardErkhov/ibm-granite_-_granite-3b-code-instruct-2k-exl2 --revision 6_5 --local-dir granite-3b-code-instruct-2k-6_5 --local-dir-use-symlinks False
```
Windows (which sometimes rejects `_` in folder names):
```shell
huggingface-cli download RichardErkhov/ibm-granite_-_granite-3b-code-instruct-2k-exl2 --revision 6_5 --local-dir granite-3b-code-instruct-2k-6.5 --local-dir-use-symlinks False
```

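The same branch download can also be scripted from Python with `huggingface_hub` (a minimal sketch; the repo id assumes this quant is hosted under the RichardErkhov namespace):

```python
from huggingface_hub import snapshot_download

# download the 6.5 bpw branch into a local folder
snapshot_download(
    repo_id="RichardErkhov/ibm-granite_-_granite-3b-code-instruct-2k-exl2",
    revision="6_5",
    local_dir="granite-3b-code-instruct-2k-6_5",
)
```
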
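Once downloaded, EXL2 weights are loaded with [ExLlamaV2](https://github.com/turboderp/exllamav2) rather than plain `transformers`. A minimal loading sketch modeled on exllamav2's example scripts (class and method names can shift between releases, so treat this as a starting point, not a definitive recipe):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# point the config at the folder produced by the download step above
config = ExLlamaV2Config()
config.model_dir = "granite-3b-code-instruct-2k-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

# prompt formatting is up to you; see the chat template usage in the Generation section below
print(generator.generate_simple("Write a Python function to find the maximum value in a list.", settings, 128))
```
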
Original model description:
---
pipeline_tag: text-generation
base_model: ibm-granite/granite-3b-code-base-2k
inference: false
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-3b-code-instruct
  results:
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesis(Python)
    metrics:
    - name: pass@1
      type: pass@1
      value: 51.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesis(JavaScript)
    metrics:
    - name: pass@1
      type: pass@1
      value: 43.9
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesis(Java)
    metrics:
    - name: pass@1
      type: pass@1
      value: 41.5
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesis(Go)
    metrics:
    - name: pass@1
      type: pass@1
      value: 31.7
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesis(C++)
    metrics:
    - name: pass@1
      type: pass@1
      value: 40.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesis(Rust)
    metrics:
    - name: pass@1
      type: pass@1
      value: 29.3
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalExplain(Python)
    metrics:
    - name: pass@1
      type: pass@1
      value: 39.6
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalExplain(JavaScript)
    metrics:
    - name: pass@1
      type: pass@1
      value: 26.8
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalExplain(Java)
    metrics:
    - name: pass@1
      type: pass@1
      value: 39.0
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalExplain(Go)
    metrics:
    - name: pass@1
      type: pass@1
      value: 14.0
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalExplain(C++)
    metrics:
    - name: pass@1
      type: pass@1
      value: 23.8
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalExplain(Rust)
    metrics:
    - name: pass@1
      type: pass@1
      value: 12.8
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalFix(Python)
    metrics:
    - name: pass@1
      type: pass@1
      value: 26.8
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalFix(JavaScript)
    metrics:
    - name: pass@1
      type: pass@1
      value: 28.0
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalFix(Java)
    metrics:
    - name: pass@1
      type: pass@1
      value: 33.5
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalFix(Go)
    metrics:
    - name: pass@1
      type: pass@1
      value: 27.4
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalFix(C++)
    metrics:
    - name: pass@1
      type: pass@1
      value: 31.7
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalFix(Rust)
    metrics:
    - name: pass@1
      type: pass@1
      value: 16.5
      verified: false
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png)

# Granite-3B-Code-Instruct-2K

## Model Summary
**Granite-3B-Code-Instruct-2K** is a 3B-parameter model fine-tuned from *Granite-3B-Code-Base-2K* on a combination of **permissively licensed** instruction data to enhance instruction-following capabilities, including logical reasoning and problem-solving skills.

- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324)
- **Release Date:** May 6th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Usage
### Intended use
The model is designed to respond to coding-related instructions and can be used to build coding assistants.

<!-- TO DO: Check starcoder2 instruct code example that includes the template https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1 -->

### Generation
This is a simple example of how to use the **Granite-3B-Code-Instruct-2K** model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # or "cpu"
model_path = "ibm-granite/granite-3b-code-instruct-2k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
    {"role": "user", "content": "Write a code to find the maximum value in a list of numbers."},
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
    input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
    print(i)
```
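
Note that the decoded text above includes the echoed prompt. A small optional variation, used in place of the decode step, keeps only the newly generated tokens (assuming `output` still holds the raw ids returned by `model.generate`):

```python
# decode only the tokens generated after the prompt
prompt_length = input_tokens["input_ids"].shape[-1]
reply = tokenizer.batch_decode(output[:, prompt_length:], skip_special_tokens=True)[0]
print(reply)
```
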
<!-- TO DO: Check this part -->
## Training Data
Granite Code Instruct models are trained on the following types of data.
* Code Commits Datasets: We sourced code commits data from the [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft) dataset, a filtered version of the full CommitPack dataset. From the CommitPackFT dataset, we only consider data for 92 programming languages. Our inclusion criterion boils down to selecting the programming languages common to CommitPackFT and the 116 languages we considered when pretraining the base model (*Granite-3B-Code-Base*).
* Math Datasets: We consider two high-quality math datasets, [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) and [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA). Due to license issues, we filtered out GSM8K-RFT and Camel-Math from the MathInstruct dataset.
* Code Instruction Datasets: We use [Glaive-Code-Assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [Glaive-Function-Calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [NL2SQL11](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction) and a small collection of synthetic API calling datasets.
* Language Instruction Datasets: We include high-quality datasets such as [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) and an open-license-filtered version of [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). We also include a collection of hardcoded prompts to ensure the model generates correct outputs given inquiries about its name or developers.

## Infrastructure
We train the Granite Code models on two of IBM's supercomputing clusters, Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs, respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.

## Ethical Considerations and Limitations
Granite code instruct models are primarily finetuned using instruction-response pairs across a specific set of programming languages. Thus, their performance may be limited on out-of-domain programming languages. In this situation, it is beneficial to provide few-shot examples to steer the model's output. Moreover, developers should perform safety testing and target-specific tuning before deploying these models in critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to the *[Granite-3B-Code-Base-2K](https://huggingface.co/ibm-granite/granite-3b-code-base-2k)* model card.
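
As a concrete illustration of the few-shot suggestion above, one can seed the chat with a worked example before the real request (a hypothetical sketch reusing the `tokenizer` from the Generation section; the example language and snippets are purely illustrative):

```python
# few-shot chat: one solved example steers the model toward the target language
chat = [
    {"role": "user", "content": "Write a Lua function that returns the sum of two numbers."},
    {"role": "assistant", "content": "function add(a, b)\n    return a + b\nend"},
    {"role": "user", "content": "Write a Lua function that returns the maximum value in a table of numbers."},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```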