---
tags:
- gguf
- llama.cpp
- unsloth
- vision-language-model
---

# gemma_test : GGUF

This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).

**Example usage**:
- For text-only LLMs: `./llama.cpp/llama-cli -hf adityaharshef3/gemma_test --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf adityaharshef3/gemma_test --jinja`

## Available model files
- `gemma-3-4b-it.Q4_K_M.gguf`
- `gemma-3-4b-it.F16-mmproj.gguf`

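If you have downloaded both files locally, the quantized model and its vision projector can be loaded together by passing them explicitly. This is a sketch assuming the files sit in the current directory and llama.cpp is built in `./llama.cpp`:

```shell
# Run the multimodal CLI with the quantized weights and the
# separate mmproj (vision projector) file. --jinja enables the
# model's bundled chat template.
./llama.cpp/llama-mtmd-cli \
  -m gemma-3-4b-it.Q4_K_M.gguf \
  --mmproj gemma-3-4b-it.F16-mmproj.gguf \
  --jinja
```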
## ⚠️ Ollama note for vision models
**Important:** Ollama currently does not support separate mmproj files for vision models.

To create an Ollama model from this vision model:
1. Place the `Modelfile` in the same directory as the finetuned bf16 merged model.
2. Run: `ollama create model_name -f ./Modelfile` (replace `model_name` with your desired name).

This will create a unified bf16 model that Ollama can use.
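A minimal `Modelfile` for this could look like the sketch below. The `FROM` path is an assumption, a placeholder for wherever your merged bf16 model lives:

```
# Hypothetical path to the merged bf16 model (directory of
# safetensors or a single merged GGUF file) -- adjust to yours.
FROM ./gemma-3-4b-it-merged
```

After `ollama create`, the model can be invoked as usual with `ollama run model_name`.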
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)