Generic llama.cpp quants of Minthy/ToriiGate-0.5, made with llama.cpp release b8720.

You can find imatrix and K_L quants (with `--token-embedding-type` and `--output-tensor-type` set to Q8_0) here:
- https://huggingface.co/SleepVeryHard/ToriiGate-0.5_GGUF

The README mentions that `--reasoning off` (`-rea off`) should be passed as a llama.cpp CLI argument to work around an issue where the actual output ends up inside the reasoning section. I found that to be the case too, and disabling reasoning helped. The GGUFs have been updated to include the correct chat template.
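As a sketch, the workaround above might look like the following when serving the model. The filenames and the `llama-server` binary choice are assumptions on my part; `--reasoning off` is the flag the README recommends, so check your llama.cpp build's `--help` output if it is not recognized.

```shell
# Hypothetical filenames; substitute the quant and vision-encoder
# (mmproj) files you actually downloaded from the GGUF repo.
# --reasoning off keeps the answer out of the reasoning block,
# per the README note above.
llama-server \
  -m ToriiGate-0.5-Q4_K_M.gguf \
  --mmproj mmproj-ToriiGate-0.5-Q8_0.gguf \
  --reasoning off
```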

Original model + more info like prompts/formats, examples, stats:
- https://huggingface.co/Minthy/ToriiGate-0.5

Tested the Q4_K_M variant with the Q8_0 vision encoder:

*(example output image)*

Model size: 5B params · Architecture: qwen35

