|
|
--- |
|
|
tags: |
|
|
- gguf |
|
|
- llama.cpp |
|
|
- unsloth |
|
|
- vision-language-model |
|
|
base_model: |
|
|
- google/gemma-3-4b-it |
|
|
--- |
|
|
|
|
|
# gem3COMPILAR: GGUF
|
|
|
|
|
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth). |
|
|
|
|
|
**Example usage**: |
|
|
- For text-only LLMs: `./llama.cpp/llama-cli -hf nullzero-live/gem3COMPILAR --jinja`
|
|
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf nullzero-live/gem3COMPILAR --jinja` |
|
|
|
|
|
## Available model files
|
|
- `gemma-3-4b-it.Q8_0.gguf` |
|
|
- `gemma-3-4b-it.F16-mmproj.gguf` |
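
To run the quantized weights together with the vision projector locally, both files can be passed to `llama-mtmd-cli`. The sketch below assumes the two GGUF files sit in the current directory; the image path and prompt are placeholders:

```shell
# Run the multimodal CLI with the Q8_0 weights plus the F16 vision projector.
# File paths, image, and prompt are illustrative placeholders.
./llama.cpp/llama-mtmd-cli \
  -m gemma-3-4b-it.Q8_0.gguf \
  --mmproj gemma-3-4b-it.F16-mmproj.gguf \
  --image ./example.png \
  -p "Describe this image."
```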
|
|
|
|
|
## ⚠️ Ollama Note for Vision Models |
|
|
**Important:** Ollama currently does not support separate mmproj files for vision models. |
|
|
|
|
|
To create an Ollama model from this vision model: |
|
|
1. Place the `Modelfile` in the same directory as the finetuned bf16 merged model |
|
|
2. Run: `ollama create model_name -f ./Modelfile`
|
|
(Replace `model_name` with your desired name) |
|
|
|
|
|
This will create a unified bf16 model that Ollama can use. |
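
A minimal `Modelfile` for the steps above might look like the following sketch; the `FROM` path is an assumption and should point at your merged bf16 model directory:

```
# Point Ollama at the merged bf16 model (directory name is a placeholder).
FROM ./gemma-3-4b-it-merged
```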
|
|
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |