---
license: apache-2.0
pipeline_tag: text-generation
tags:
- fp8
- quantized
- llm-compressor
- compressed-tensors
- red hat
base_model:
- ibm-granite/granite-4.0-h-tiny
---

# granite-4.0-h-tiny-FP8-dynamic

## Model Overview
- **Model Architecture:** GraniteMoeHybridForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:**
- **Version:** 1.0
- **Model Developers:** Red Hat

Quantized version of [ibm-granite/granite-4.0-h-tiny](https://huggingface.co/ibm-granite/granite-4.0-h-tiny).
### Model Optimizations

This model was obtained by quantizing the weights and activations of [ibm-granite/granite-4.0-h-tiny](https://huggingface.co/ibm-granite/granite-4.0-h-tiny) to the FP8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, reducing disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within the transformer blocks of the language model are quantized.
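As a quick back-of-the-envelope check on that claim, the sketch below compares the weight footprint at 16 and 8 bits per parameter. The 7B total-parameter count is an assumed round figure for illustration, not an official number for this model.

```python
# Rough weight-memory estimate: 16-bit vs. FP8 parameters.
# NOTE: total_params is an assumed round figure for illustration only.
total_params = 7e9

bf16_gb = total_params * 2 / 1e9  # 16 bits = 2 bytes per parameter
fp8_gb = total_params * 1 / 1e9   # 8 bits = 1 byte per parameter

print(f"16-bit weights: ~{bf16_gb:.0f} GB")
print(f"FP8 weights:    ~{fp8_gb:.0f} GB (~50% smaller)")
```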
## Deployment

### Use with vLLM

1. Install vLLM from main:
```
uv pip install -U git+https://github.com/vllm-project/vllm.git \
  --extra-index-url https://wheels.vllm.ai/nightly \
  --no-deps \
  --no-cache

uv pip install compressed-tensors==0.12.3a20251114 --no-cache
uv pip install --upgrade torchvision --break-system-packages --no-cache
uv pip install cloudpickle msgspec zmq blake3 cachetools prometheus_client fastapi openai openai_harmony pybase64 llguidance diskcache xgrammar lm-format-enforcer partial-json-parser cbor2 einops gguf numba --no-cache
```
2. Initialize the vLLM server:
```
vllm serve RedHatAI/granite-4.0-h-tiny-FP8-dynamic --tensor_parallel_size 1
```
By default the server exposes an OpenAI-compatible API on port 8000.
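Once the server is up, you can confirm it is serving the model before sending chat requests. A minimal sketch, assuming a default local deployment on port 8000:

```python
from openai import OpenAI

# Point at the local vLLM server; adjust the host/port if you changed them.
client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

# List the models the server is currently serving.
for m in client.models.list().data:
    print(m.id)
```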
3. Send requests to the server:

```python
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://<your-server-host>:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "RedHatAI/granite-4.0-h-tiny-FP8-dynamic"

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = client.chat.completions.create(
    model=model,
    messages=messages,
)

generated_text = outputs.choices[0].message.content
print(generated_text)
```
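For long generations you may prefer to stream tokens as they are produced. A minimal variant of the request above, reusing the same `client`, `model`, and `messages` (`stream=True` is standard OpenAI-client usage, which vLLM's server supports):

```python
# Streaming variant of the chat request above.
stream = client.chat.completions.create(
    model=model,
    messages=messages,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```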
## Creation

This model was quantized using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, as shown below.

<details>
<summary>Creation details</summary>

Install a specific llm-compressor version:
```
uv pip install git+https://github.com/vllm-project/llm-compressor.git@refs/pull/2001/head --no-cache
uv pip install --upgrade torchvision --break-system-packages --no-cache
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.utils import dispatch_for_generation
from llmcompressor.modeling import replace_modules_for_calibration
from llmcompressor.modeling.granite4 import pack_3d_experts

MODEL_ID = "ibm-granite/granite-4.0-h-tiny"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Unpack the MoE experts into standard Linear modules so they can be quantized.
model = replace_modules_for_calibration(model)

# Keep the output head and the MoE routers in full precision.
ignore_lay = ["lm_head", "re:.*block_sparse_moe.router"]

recipe = QuantizationModifier(
    targets=["Linear"],
    scheme="FP8_DYNAMIC",
    ignore=ignore_lay,
)

oneshot(model=model, recipe=recipe)

# Confirm the quantized model still generates sensible text.
print("========== SAMPLE GENERATION ==============")
dispatch_for_generation(model)
input_ids = tokenizer(
    "Describe Large Language Model", return_tensors="pt"
).input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=35)
print(tokenizer.decode(output[0]))
print("==========================================")

SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-FP8-dynamic"
print(f"Saving to {SAVE_DIR}")

model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
# Re-pack the per-expert Linear weights back into the 3D expert tensors.
pack_3d_experts(SAVE_DIR)
```
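After saving, you can sanity-check that the checkpoint carries the expected quantization metadata. A minimal sketch reading the `config.json` written by llm-compressor (the expected values in the comments are assumptions based on the FP8-dynamic scheme):

```python
import json
import os

SAVE_DIR = "granite-4.0-h-tiny-FP8-dynamic"

# llm-compressor records the quantization scheme in config.json.
with open(os.path.join(SAVE_DIR, "config.json")) as f:
    config = json.load(f)

qcfg = config.get("quantization_config", {})
print(qcfg.get("quant_method"))  # expected: "compressed-tensors"
print(qcfg.get("format"))        # expected: a float-quantized FP8 format
```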
</details>

## Evaluation

The model was evaluated on the OpenLLM leaderboard tasks (v1 and v2), using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), and on coding benchmarks (HumanEval and MBPP), using [evalplus](https://github.com/evalplus/evalplus).
[vLLM](https://docs.vllm.ai/en/stable/) was used for all evaluations.
<details>
<summary>Evaluation details</summary>

Install vLLM from main:
```
uv pip install -U git+https://github.com/vllm-project/vllm.git \
  --extra-index-url https://wheels.vllm.ai/nightly \
  --no-deps \
  --no-cache

uv pip install compressed-tensors==0.12.3a20251114 --no-cache
uv pip install --upgrade torchvision --break-system-packages --no-cache
uv pip install cloudpickle msgspec zmq blake3 cachetools prometheus_client fastapi openai openai_harmony pybase64 llguidance diskcache xgrammar lm-format-enforcer partial-json-parser cbor2 einops gguf numba --no-cache
```
**OpenLLM v1**
```
lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/granite-4.0-h-tiny-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=16384,tensor_parallel_size=1,gpu_memory_utilization=0.9,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --show_config
```
**OpenLLM v2**
```
lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/granite-4.0-h-tiny-FP8-dynamic",dtype=auto,add_bos_token=False,max_model_len=16384,tensor_parallel_size=1,gpu_memory_utilization=0.7,disable_log_stats=True,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks leaderboard \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --write_out \
  --batch_size auto \
  --show_config
```
**Coding Benchmarks**

```
evalplus.evaluate --model "RedHatAI/granite-4.0-h-tiny-FP8-dynamic" \
  --dataset "humaneval" \
  --backend vllm \
  --tp 1 \
  --greedy

evalplus.evaluate --model "RedHatAI/granite-4.0-h-tiny-FP8-dynamic" \
  --dataset "mbpp" \
  --backend vllm \
  --tp 1 \
  --greedy
```

</details>
### Accuracy Comparison