- [Output and embed tensors quantized to q8_0, all other tensors quantized to q4_k.](https://huggingface.co/RobertSinclair)
- [Output and embed tensors quantized to bf16, all other tensors quantized to q5_k, q6_k, q8_0 and q8_0 --pure.](https://huggingface.co/RobertSinclair)
- BF16 and imatrix q5_k, q6_k available.

```
python convert_hf_to_gguf.py --outtype bf16 phi-4 --outfile phi-4.bf16.gguf
```
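The quantized variants described above (e.g. q4_k weights with q8_0 output and embed tensors) are typically produced from this bf16 GGUF with llama.cpp's `llama-quantize` tool. A minimal sketch, assuming a local llama.cpp build; file names are illustrative:

```shell
# Quantize the bf16 GGUF to Q4_K_M, keeping the output and
# token-embedding tensors at q8_0 (the mix described above).
# Assumes llama-quantize was built from llama.cpp in this directory.
./llama-quantize \
  --output-tensor-type q8_0 \
  --token-embedding-type q8_0 \
  phi-4.bf16.gguf phi-4.q8_0-q4_k.gguf Q4_K_M
```

Passing `--pure` instead disables the k-quant mixtures, so every tensor is quantized to the same requested type.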