---
tags:
- gguf
- llama.cpp
- unsloth
---
# Quen32: GGUF
|
|
This model was fine-tuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
|
|
**Example usage**:
- For text-only LLMs: `llama-cli -hf AlSamCur123/Quen32 --jinja`
- For multimodal models: `llama-mtmd-cli -hf AlSamCur123/Quen32 --jinja`
|
|
## Available model files
- `qwq-32b.Q4_0.gguf`
- `qwq-32b.BF16-00002-of-00002.gguf`
|
|
## Ollama
An Ollama Modelfile is included for easy deployment.
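For reference, a minimal Modelfile for a local GGUF file looks like the following sketch. The file name and parameter values here are illustrative assumptions, not the exact contents of the bundled Modelfile:

```
# Hypothetical Modelfile sketch; the bundled Modelfile may differ.
# Point FROM at the locally downloaded quantized GGUF file.
FROM ./qwq-32b.Q4_0.gguf

# Sampling temperature is an illustrative choice, not a recommendation.
PARAMETER temperature 0.6
```

With a Modelfile in the current directory, the model can be registered and run with `ollama create quen32 -f Modelfile` followed by `ollama run quen32` (the local model name `quen32` is arbitrary).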
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
|