---
tags:
- gguf
- llama.cpp
- unsloth
- vision-language-model
---

# pd_pull_5000_model_16bit_9435_gguff : GGUF

This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).

**Example usage**:
- For text-only LLMs: `./llama.cpp/llama-cli -hf happycode2708/pd_pull_5000_model_16bit_9435_gguff --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf happycode2708/pd_pull_5000_model_16bit_9435_gguff --jinja`

## Available model files

- `Qwen3-VL-8B-Instruct.Q8_0.gguf`
- `Qwen3-VL-8B-Instruct.Q4_K_M.gguf`
- `Qwen3-VL-8B-Instruct.BF16-mmproj.gguf`

This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).
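The two quantizations trade file size for quality. As a rough sizing sketch for choosing between them (the bits-per-weight figures below are approximate, commonly quoted values for these llama.cpp quant types, not measured from the files in this repo):

```python
# Rough GGUF file-size estimate for an 8B-parameter model.
# Bits-per-weight values are approximate assumptions, not measured
# from the files in this repository.
PARAMS = 8e9  # 8B parameters

def est_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Estimated file size in GB: params * bpw / 8 bits-per-byte / 1e9."""
    return params * bits_per_weight / 8 / 1e9

QUANTS = {"Q8_0": 8.5, "Q4_K_M": 4.85}  # approximate bits per weight

for name, bpw in QUANTS.items():
    print(f"{name}: ~{est_size_gb(bpw):.1f} GB")
```

The Q4_K_M file fits in roughly half the memory of Q8_0 at some quality cost; the `BF16-mmproj` file is the vision projector and is loaded alongside either quant when running the multimodal CLI.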