Generic llama.cpp quants of Minthy/ToriiGate-0.5, using llama.cpp release b8720
You can find the imatrix and K_L quants (with `--token-embedding-type` and `--output-tensor-type` set to Q8_0) here:
- https://huggingface.co/SleepVeryHard/ToriiGate-0.5_GGUF
The README mentions that the GGUFs have been updated to include the correct chat template, and that `--reasoning off` (or `-rea off`) should be passed as a llama.cpp CLI argument to work around an issue where the actual output ends up entirely inside the reasoning section. I ran into the same issue, and disabling reasoning fixed it for me.
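A minimal invocation sketch based on the README's advice. The binary name, model/projector file names, and image path below are assumptions for illustration; the `--reasoning off` flag is quoted from the model README, not independently verified against llama.cpp's argument parser. The script checks for the binary first so it degrades gracefully if llama.cpp is not installed:

```shell
# Sketch only: file names and the --reasoning flag are assumptions
# taken from the model README, not verified llama.cpp behavior.
MODEL="ToriiGate-0.5-Q4_K_M.gguf"
MMPROJ="mmproj-Q8_0.gguf"   # hypothetical name for the Q8_0 vision encoder

if command -v llama-mtmd-cli >/dev/null 2>&1; then
  # Multimodal CLI run with reasoning disabled per the README workaround
  llama-mtmd-cli -m "$MODEL" --mmproj "$MMPROJ" \
    --reasoning off \
    --image input.png \
    -p "Describe the image."
else
  echo "llama-mtmd-cli not found; build/install llama.cpp first"
fi
```

Without the flag, generation may appear empty because the caption lands in the reasoning channel rather than the final output.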
Original model + more info like prompts/formats, examples, stats:
- https://huggingface.co/Minthy/ToriiGate-0.5
I tested the Q4_K_M variant with the Q8_0 vision encoder.
