Update README.md #2
by supriyar · opened

README.md CHANGED
@@ -17,11 +17,11 @@ base_model:
 pipeline_tag: text-generation
 ---
 
-[Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct)
+[Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) int4 weight-only quantization, by the PyTorch team. Use it directly, or serve it using [vLLM](https://docs.vllm.ai/en/latest/). It gives a 67% VRAM reduction and a 12-20% speedup on A100 GPUs.
 
 # Quantization Recipe
 
-
+Install the required packages:
 ```
 pip install git+https://github.com/huggingface/transformers@main
 pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
@@ -29,7 +29,7 @@ pip install torch
 pip install accelerate
 ```
 
-
+Use the following code to get the quantized model:
 ```
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
@@ -144,7 +144,6 @@ lm_eval --model hf --model_args pretrained=pytorch/Phi-4-mini-instruct-int4wo-hq
 
 # Peak Memory Usage
 
-We can use the following code to get a sense of peak memory usage during inference:
 
 ## Results
 
@@ -155,6 +154,7 @@ We can use the following code to get a sense of peak memory usage during inferen
 
 
 ## Benchmark Peak Memory
+We can use the following code to get a sense of peak memory usage during inference:
 
 ```
 import torch
@@ -198,8 +198,8 @@ print(f"Peak Memory Usage: {mem:.02f} GB")
 
 # Model Performance
 
-Our int4wo is only optimized for batch size 1, so
-
+Our int4wo is only optimized for batch size 1, so expect some slowdown with larger batch sizes. We expect this to be used in local server deployments for a single user or a few users,
+where decode tokens per second will matter more than time to first token.
 
 
 ## Results (A100 machine)
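The 67% VRAM reduction claimed in the new model-card intro can be sanity-checked with back-of-envelope arithmetic: int4 weight-only quantization packs two 4-bit weights per byte plus some per-group metadata, versus 2 bytes per weight for bf16. A minimal sketch — the group size of 128 and the bf16 scale/zero-point metadata are assumptions for illustration, not details taken from this diff:

```python
def weight_bytes_bf16(n_params: int) -> int:
    # bf16 stores every weight in 2 bytes.
    return 2 * n_params

def weight_bytes_int4wo(n_params: int, group_size: int = 128) -> int:
    # int4 weight-only: two 4-bit weights packed per byte, plus an
    # assumed bf16 scale and bf16 zero point per quantization group.
    packed = n_params // 2
    n_groups = n_params // group_size
    metadata = n_groups * (2 + 2)
    return packed + metadata

n = 1024 * 1024  # any weight count divisible by the group size
reduction = 1 - weight_bytes_int4wo(n) / weight_bytes_bf16(n)
print(f"weight-memory reduction: {reduction:.1%}")  # ~73.4%
```

Under these assumptions the weights alone shrink by roughly 73%; an end-to-end figure like 67% would plausibly be lower because activations and the KV cache stay in higher precision.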
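The performance note's point — that for single-user serving, decode tokens per second matters more than time to first token — can be illustrated with a small latency model. The numbers below are hypothetical, not benchmark results from this card:

```python
def generation_latency_s(ttft_s: float, decode_tps: float, new_tokens: int) -> float:
    # Total latency = time to first token, then the remaining tokens
    # emitted at the steady-state decode rate.
    return ttft_s + (new_tokens - 1) / decode_tps

# Hypothetical single-user chat turn generating 256 new tokens.
slow_decode = generation_latency_s(ttft_s=0.2, decode_tps=30, new_tokens=256)
fast_decode = generation_latency_s(ttft_s=0.4, decode_tps=60, new_tokens=256)
print(f"{slow_decode:.2f}s vs {fast_decode:.2f}s")  # 8.70s vs 4.65s
```

Even with a worse time to first token, doubling the decode rate nearly halves the total turn latency, which is why batch-size-1 decode throughput is the figure of merit here.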