cmh committed
Commit c4fbfcc · verified · Parent(s): 1357581

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED

```diff
@@ -33,7 +33,7 @@ llama-quantize --allow-requantize --pure phi-4.bf16.gguf phi-4.bf16.q8_p.gguf q8
 | [phi-4.bf16.q6.im](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.q6.im.gguf) | 6.00 bits per weight | 13.2 GB | **15.5 GB** |
 | [phi-4.bf16.q8](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.q8.gguf) | 8.00 bits per weight | 16.5 GB | **18.5 GB** |
 | [phi-4.bf16.q8_p](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.q8_p.gguf) | 8.00 bits per weight | 15.6 GB | **18.6 GB** |
-| [phi-4.bf16](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.gguf) | 16.00 bits per weight | 29.3 | |
+| [phi-4.bf16](https://huggingface.co/cmh/test/blob/main/phi-4.bf16.gguf) | 16.00 bits per weight | 29.3 GB | |

 <sub>*approximate value at 16k context, FP16 cache.<sup>
```
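The added "GB" unit can be sanity-checked from the bits-per-weight column: file size is roughly parameter count × bits per weight ÷ 8. A minimal sketch, assuming phi-4's roughly 14.7B parameters (a figure not stated in the diff itself) and ignoring GGUF metadata and quantization overhead:

```python
# Rough sanity check: file size ≈ params × bits_per_weight / 8 bytes.
# The 14.7e9 parameter count is an assumption about phi-4, not from the diff;
# GGUF metadata and per-tensor overhead are ignored.
params = 14.7e9          # approximate phi-4 parameter count (assumption)
bits_per_weight = 16.0   # bf16, as in the last table row
size_gb = params * bits_per_weight / 8 / 1e9  # decimal gigabytes
print(f"{size_gb:.1f} GB")  # close to the 29.3 GB listed for phi-4.bf16
```

The lower-bit rows will not match this formula as closely, since quantized GGUF files keep some tensors (e.g. embeddings) at higher precision and carry per-block scale factors.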