Update README.md
README.md
CHANGED

@@ -13,7 +13,7 @@ pipeline_tag: text-generation
 [ExLlamaV2 is an inference library for running local LLMs on modern consumer GPUs.](https://github.com/turboderp-org/exllamav2)
 
 
-| Filename | Quant type | File Size | Vram*|
+| Filename | Quant type | File Size | ~Vram*|
 | -------- | ---------- | --------- | -------- |
 | [phi-4_hb8_3bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_3bpw) | 3.00 bits per weight | 6.66 GB | **10,3 GB** |
 | [phi-4_hb8_4bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_4bpw) | 4.00 bits per weight | 8.36 GB | **11,9 GB** |
@@ -22,7 +22,7 @@ pipeline_tag: text-generation
 | [phi-4_hb8_7bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_7bpw) | 7.00 bits per weight | 13.5 GB | **16,7 GB** |
 | [phi-4_hb8_8bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_8bpw) | 8.00 bits per weight | 15.2 GB | **18,2 GB** |
 
-<sub>*at 16k context, FP16 cache.<sup>
+<sub>*approximate value at 16k context, FP16 cache.</sub>
 
 ---------------------------------------------
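The quants listed in the table can be loaded with ExLlamaV2's Python API. A minimal sketch, assuming `exllamav2` is installed, a CUDA GPU is available, and one branch (here `hb8_4bpw`) has already been downloaded to a local directory; the directory path and the prompt are placeholders:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path: fetch the desired quant branch first, e.g.
#   huggingface-cli download cmh/phi-4_exl2 --revision hb8_4bpw --local-dir phi-4_hb8_4bpw
config = ExLlamaV2Config("phi-4_hb8_4bpw")
model = ExLlamaV2(config)

# FP16 cache sized for 16k context, matching the conditions behind
# the ~VRAM column in the table above
cache = ExLlamaV2Cache(model, max_seq_len=16384, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

print(generator.generate(prompt="Explain quantization in one sentence.",
                         max_new_tokens=64))
```

Lower bits-per-weight branches trade some output quality for the smaller VRAM footprints shown in the table, so pick the largest quant that fits your GPU.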