This is a 2-bit quantization of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
## Model loading
Please follow the instructions in [QuIP-for-all](https://github.com/chu-tianxiang/QuIP-for-all) for usage.
As an alternative, you can use the [vLLM branch](https://github.com/chu-tianxiang/vllm-gptq/tree/quip_gemv) for faster inference. QuIP has to launch around five kernels for each linear layer, so it is very helpful for vLLM to use CUDA graphs to reduce the launch overhead. If you have trouble installing fast-hadamard-transform from pip, you can also install it from [source](https://github.com/Dao-AILab/fast-hadamard-transform).
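For intuition on the storage savings, 2-bit quantization stores each weight code in two bits, i.e. four codes per byte. The sketch below is only a generic illustration of such packing (it is not QuIP's actual lattice-codebook format, and the function names are made up for this example):

```python
import numpy as np

def pack_2bit(codes: np.ndarray) -> np.ndarray:
    """Pack an array of 2-bit codes (values 0..3) into bytes, four codes per byte."""
    assert codes.max() <= 3 and codes.size % 4 == 0
    v = codes.astype(np.uint8).reshape(-1, 4)
    # Each output byte holds codes [c0, c1, c2, c3] in bits 0-1, 2-3, 4-5, 6-7.
    return (v[:, 0] | (v[:, 1] << 2) | (v[:, 2] << 4) | (v[:, 3] << 6)).astype(np.uint8)

def unpack_2bit(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_2bit: recover the 2-bit codes from packed bytes."""
    p = packed[:, None]
    return ((p >> np.array([0, 2, 4, 6])) & 3).reshape(-1).astype(np.uint8)

codes = np.array([0, 1, 2, 3, 3, 2, 1, 0], dtype=np.uint8)
packed = pack_2bit(codes)            # 8 codes -> 2 bytes, 4x smaller than int8
restored = unpack_2bit(packed)
assert np.array_equal(codes, restored)
```

QuIP itself quantizes against a codebook after incoherence processing, so the real kernels do a codebook lookup rather than a plain bit shift, but the 16x memory reduction versus fp16 comes from exactly this kind of dense bit packing.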
## Perplexity
Measured on Wikitext with a 4096-token context length.
| fp16 | 2-bit |
| ---- | ----- |
| 3.8825 | 5.2799 |
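Perplexity is the exponential of the mean negative log-likelihood per token, so the two columns above can be compared directly as "effective branching factor". A minimal sketch of the computation (the token log-probabilities below are a toy placeholder, not output from the actual evaluation):

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp(mean negative log-likelihood over all scored tokens)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy check: if every token has probability 1/8, perplexity is 8.
logprobs = [math.log(1 / 8)] * 100
print(round(perplexity(logprobs), 6))  # 8.0
```

In the real evaluation the log-probabilities come from the model scoring Wikitext in 4096-token windows; the 3.8825 -> 5.2799 gap is the quality cost of going from fp16 to 2 bits.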
## Speed
Measured with the `examples/benchmark_latency.py` script from the vLLM repo.
At batch size = 1, it generates 16.3 tokens/s on a single RTX 3090.
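The tokens/s figure is simply generated tokens divided by end-to-end decode time, which is how the latency script's output converts to throughput. A tiny helper (the 128-token / 7.85 s numbers below are illustrative, not a fresh measurement):

```python
def tokens_per_second(num_tokens: int, latency_s: float) -> float:
    """Decode throughput implied by a latency measurement."""
    return num_tokens / latency_s

# E.g. 128 generated tokens in ~7.85 s corresponds to ~16.3 tokens/s,
# in line with the batch-size-1 number above.
print(round(tokens_per_second(128, 7.85), 1))  # 16.3
```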