---
tags:
- gguf
- llama.cpp
- unsloth
- vision-language-model
---

# gem3COMPILAR : GGUF

This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).

**Example usage**:
- For text-only LLMs: `./llama.cpp/llama-cli -hf nullzero-live/gem3COMPILAR --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf nullzero-live/gem3COMPILAR --jinja`

## Available model files
- `gemma-3-4b-it.Q8_0.gguf`
- `gemma-3-4b-it.F16-mmproj.gguf`

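If you download the files directly instead of pulling with `-hf`, the quantized model and its vision projector can be passed to `llama-mtmd-cli` explicitly. The paths below are assumptions — adjust them to wherever you saved the files:

```
# Load the Q8_0 language model together with the F16 vision projector;
# --mmproj is llama.cpp's flag for a separate multimodal projector file.
./llama.cpp/llama-mtmd-cli \
  -m ./gemma-3-4b-it.Q8_0.gguf \
  --mmproj ./gemma-3-4b-it.F16-mmproj.gguf \
  --jinja
```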

## ⚠️ Ollama Note for Vision Models
**Important:** Ollama currently does not support separate mmproj files for vision models.

To create an Ollama model from this vision model:
1. Place the `Modelfile` in the same directory as the finetuned bf16 merged model.
2. Run: `ollama create model_name -f ./Modelfile` (replace `model_name` with your desired name).

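The two steps above can be sketched as follows. The `FROM` path is an assumption — point it at your own merged bf16 model directory or GGUF file:

```
# Modelfile — minimal sketch; the FROM path below is a placeholder
# for your merged bf16 model.
FROM ./merged_model
```

Then build and test it with `ollama create model_name -f ./Modelfile` followed by `ollama run model_name`.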
This will create a unified bf16 model that Ollama can use.

This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)