cmh committed
Commit 4347f8b · verified · 1 Parent(s): 5369b24

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -12,8 +12,7 @@ pipeline_tag: text-generation
 
 - [Output and embed tensors quantized to q8_0, all other tensors quantized for q4_k.](https://huggingface.co/RobertSinclair)
 - [Output and embed tensors quantized to bf16, all other tensors quantized for q5_k, q6_k, q8_0 and q8_0 --pure.](https://huggingface.co/RobertSinclair)
-- IMatrix q5_k, q6_k
-- BF16
+- BF16 and imatrix q5_k, q6_k available.
 ```
 python convert_hf_to_gguf.py --outtype bf16 phi-4 --outfile phi-4.bf16.gguf
 
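The bullets in the diff describe mixed per-tensor quantization (e.g. q8_0 output/embed tensors over a q4_k body). As a sketch of how such files are typically produced with llama.cpp's `llama-quantize` tool: this is not taken from the commit itself, and the output file names and the `imatrix.dat` path are assumptions.

```shell
# First convert the HF checkpoint to a BF16 GGUF (the command shown in the diff).
python convert_hf_to_gguf.py --outtype bf16 phi-4 --outfile phi-4.bf16.gguf

# q4_k body with output and embedding tensors kept at q8_0
# (matches the first bullet; the output file name is hypothetical).
./llama-quantize \
  --output-tensor-type q8_0 \
  --token-embedding-type q8_0 \
  phi-4.bf16.gguf phi-4.q4_k.gguf Q4_K_M

# imatrix-guided q5_k with output/embed tensors kept at bf16
# (imatrix.dat is an assumed, separately computed importance matrix).
./llama-quantize --imatrix imatrix.dat \
  --output-tensor-type bf16 \
  --token-embedding-type bf16 \
  phi-4.bf16.gguf phi-4.q5_k.gguf Q5_K_M

# --pure applies the base type to every tensor, with no mixed overrides
# (the "q8_0 --pure" variant mentioned in the second bullet).
./llama-quantize --pure phi-4.bf16.gguf phi-4.q8_0-pure.gguf Q8_0
```

The same pattern with `Q6_K` as the target type yields the q6_k variant.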