---
tags:
- gguf
- llama.cpp
- unsloth
---

# Quen32: GGUF

This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).

**Example usage**:
- For text-only LLMs: `llama-cli -hf AlSamCur123/Quen32 --jinja`
- For multimodal models: `llama-mtmd-cli -hf AlSamCur123/Quen32 --jinja`

## Available model files
- `qwq-32b.Q4_0.gguf`
- `qwq-32b.BF16-00002-of-00002.gguf`

## Ollama

An Ollama Modelfile is included for easy deployment.

This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).
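If you want to build the model locally instead of using the bundled Modelfile, a minimal sketch looks like the following. This assumes the `qwq-32b.Q4_0.gguf` file sits in the same directory as the Modelfile; the model name `quen32` is just an illustrative choice.

```
# Modelfile (sketch) — point Ollama at the local GGUF quant
FROM ./qwq-32b.Q4_0.gguf

# Optional sampling default; adjust to taste
PARAMETER temperature 0.7
```

Then register and run it with the standard Ollama commands: `ollama create quen32 -f Modelfile` followed by `ollama run quen32`.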