quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q8_0
There is also a pure f16 model in every directory.
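The same flags can be reused to produce the other quantized variants from the f16 base. A minimal sketch (the `make_quant_cmd` helper and the output filename scheme are illustrative, not part of the repositories):

```shell
# Hypothetical helper: build the quantize command for a given source
# f16 GGUF and target quant type, keeping the output tensor and token
# embeddings at f16 as in the command above.
make_quant_cmd() {
  local src="$1" qtype="$2"
  # model.f16.gguf -> model.f16.q8_0.gguf
  local dst="${src%.gguf}.${qtype}.gguf"
  echo "quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 ${src} ${dst} ${qtype}"
}

make_quant_cmd model.f16.gguf q8_0
# -> quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q8_0.gguf q8_0
```

Keeping the output tensor and embeddings at f16 while quantizing the rest of the weights is the point of these flags: the layers most sensitive to precision loss stay at full half precision.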
* [ZeroWw/Llama-3-8B-Instruct-Gradient-4194k-GGUF](https://huggingface.co/ZeroWw/Llama-3-8B-Instruct-Gradient-4194k-GGUF)
* [ZeroWw/gemma-2-9b-it-GGUF](https://huggingface.co/ZeroWw/gemma-2-9b-it-GGUF)
* [ZeroWw/llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF](https://huggingface.co/ZeroWw/llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF)
* [ZeroWw/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF](https://huggingface.co/ZeroWw/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF)