cmh committed · Commit fbacc61 · verified · 1 Parent(s): 30b27f9

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -14,7 +14,7 @@ pipeline_tag: text-generation
 - For q5_k, q6_k, q8_0 and q8_0 --pure: output and embed tensors quantized to bf16, all other tensors quantized for q5_k, q6_k, q8_0 and q8_0 --pure.
 - BF16 and imatrix for q5_k, q6_k available.
 
-| | Quant type | File Size | ~Vram*|
+| | Quant type | File Size | Vram*|
 | -------- | ---------- | --------- | -------- |
 | [phi-4.q8.q4](https://huggingface.co/cmh/test/blob/main/phi-4.q8.q4.gguf) | 4 bits per weight | 9.43 GB | **12.9 GB** |
 | [phi-4.bf16.q5](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.q5.gguf) | 5 bits per weight | 11.9 GB | **14.2 GB** |
@@ -23,7 +23,7 @@ pipeline_tag: text-generation
 | [phi-4.bf16.q6.im](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.q6.im.gguf) | 6 bits per weight | 13.2 GB | **15.5 GB** |
 | [phi-4.bf16.q8](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.q8.gguf) | 8 bits per weight | 16.5 GB | **18.5 GB** |
 | [phi-4.bf16.q8p](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.q8p.gguf) | 8 bits per weight | 15.6 GB | **18.6 GB** |
-| [phi-4.bf16](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.gguf) | 16 bits per weight | 29.3 GB |
+| [phi-4.bf16](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.gguf) | 16 bits per weight | 29.3 GB | tbd |
 
 <sub>*approximate value at **16k context, FP16 cache**.</sub>
 
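The quantization scheme the README describes (output and embedding tensors kept at bf16, remaining tensors quantized, optional imatrix, optional --pure) can be sketched with llama.cpp's `llama-quantize` tool. This is a minimal, hypothetical sketch: the input/output file names and `imatrix.dat` are illustrative, and the flags assume a recent llama.cpp build.

```shell
# Hypothetical commands, assuming llama.cpp's llama-quantize tool is built
# and phi-4.bf16.gguf is the bf16 GGUF conversion of the model.

# q8_0 with output and token-embedding tensors kept at bf16:
./llama-quantize --output-tensor-type bf16 --token-embedding-type bf16 \
    phi-4.bf16.gguf phi-4.bf16.q8.gguf q8_0

# q8_0 --pure: disable mixed per-tensor types for the remaining tensors:
./llama-quantize --pure phi-4.bf16.gguf phi-4.bf16.q8p.gguf q8_0

# q6_k guided by an importance matrix (imatrix.dat is illustrative):
./llama-quantize --imatrix imatrix.dat \
    --output-tensor-type bf16 --token-embedding-type bf16 \
    phi-4.bf16.gguf phi-4.bf16.q6.im.gguf q6_k
```

The same pattern with `q5_k` as the target type would produce the q5 variants listed in the table.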